00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 303 00:00:00.001 originally caused by: 00:00:00.002 Started by upstream project "nightly-trigger" build number 2968 00:00:00.002 originally caused by: 00:00:00.002 Started by timer 00:00:00.069 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.069 The recommended git tool is: git 00:00:00.070 using credential 00000000-0000-0000-0000-000000000002 00:00:00.071 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.093 Fetching changes from the remote Git repository 00:00:00.095 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.121 Using shallow fetch with depth 1 00:00:00.121 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.121 > git --version # timeout=10 00:00:00.143 > git --version # 'git version 2.39.2' 00:00:00.143 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.144 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.144 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:16.614 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:16.624 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:16.633 Checking out Revision dfe1eaf407377468c6843ea2a92bf64ac34e0f1c (FETCH_HEAD) 00:00:16.633 > git config core.sparsecheckout # timeout=10 00:00:16.641 > git read-tree -mu HEAD # timeout=10 00:00:16.654 > git checkout -f dfe1eaf407377468c6843ea2a92bf64ac34e0f1c # timeout=5 00:00:16.669 Commit message: "jenkins/config: adjust dsa-phy-autotest test flags" 00:00:16.669 > git rev-list --no-walk dfe1eaf407377468c6843ea2a92bf64ac34e0f1c # timeout=10 00:00:16.775 [Pipeline] Start of Pipeline 00:00:16.796 [Pipeline] library 00:00:16.798 Loading library shm_lib@master 00:00:16.798 Library shm_lib@master is cached. Copying from home. 00:00:16.810 [Pipeline] node 00:00:16.824 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:16.826 [Pipeline] { 00:00:16.834 [Pipeline] catchError 00:00:16.835 [Pipeline] { 00:00:16.844 [Pipeline] wrap 00:00:16.851 [Pipeline] { 00:00:16.858 [Pipeline] stage 00:00:16.859 [Pipeline] { (Prologue) 00:00:17.032 [Pipeline] sh 00:00:17.315 + logger -p user.info -t JENKINS-CI 00:00:17.330 [Pipeline] echo 00:00:17.331 Node: GP11 00:00:17.337 [Pipeline] sh 00:00:17.634 [Pipeline] setCustomBuildProperty 00:00:17.645 [Pipeline] echo 00:00:17.647 Cleanup processes 00:00:17.652 [Pipeline] sh 00:00:17.939 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:17.939 1950184 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:17.952 [Pipeline] sh 00:00:18.235 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:18.235 ++ grep -v 'sudo pgrep' 00:00:18.235 ++ awk '{print $1}' 00:00:18.235 + sudo kill -9 00:00:18.235 + true 00:00:18.250 [Pipeline] cleanWs 00:00:18.259 [WS-CLEANUP] Deleting project workspace... 00:00:18.259 [WS-CLEANUP] Deferred wipeout is used... 
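The "Cleanup processes" stage above collects the PIDs of any SPDK processes left behind in this workspace by an earlier run and force-kills them. Condensed into a standalone sketch (same commands as the xtrace; the pids variable is introduced here only for readability):

    # List candidate processes; pgrep -af matches against full command lines,
    # and grep -v drops the pgrep invocation itself from the list.
    pids=$(sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk \
           | grep -v 'sudo pgrep' | awk '{print $1}')
    # kill -9 exits non-zero when $pids is empty; the trailing true
    # (visible as "+ true" in the trace) keeps the step from failing then.
    sudo kill -9 $pids || true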
00:00:18.265 [WS-CLEANUP] done 00:00:18.269 [Pipeline] setCustomBuildProperty 00:00:18.281 [Pipeline] sh 00:00:18.564 + sudo git config --global --replace-all safe.directory '*' 00:00:18.648 [Pipeline] nodesByLabel 00:00:18.649 Found a total of 1 nodes with the 'sorcerer' label 00:00:18.659 [Pipeline] httpRequest 00:00:18.664 HttpMethod: GET 00:00:18.664 URL: http://10.211.164.101/packages/jbp_dfe1eaf407377468c6843ea2a92bf64ac34e0f1c.tar.gz 00:00:18.668 Sending request to url: http://10.211.164.101/packages/jbp_dfe1eaf407377468c6843ea2a92bf64ac34e0f1c.tar.gz 00:00:18.693 Response Code: HTTP/1.1 200 OK 00:00:18.693 Success: Status code 200 is in the accepted range: 200,404 00:00:18.694 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_dfe1eaf407377468c6843ea2a92bf64ac34e0f1c.tar.gz 00:00:33.749 [Pipeline] sh 00:00:34.036 + tar --no-same-owner -xf jbp_dfe1eaf407377468c6843ea2a92bf64ac34e0f1c.tar.gz 00:00:34.054 [Pipeline] httpRequest 00:00:34.059 HttpMethod: GET 00:00:34.059 URL: http://10.211.164.101/packages/spdk_3b33f433344ee82a3d99d10cfd6af5729440114b.tar.gz 00:00:34.060 Sending request to url: http://10.211.164.101/packages/spdk_3b33f433344ee82a3d99d10cfd6af5729440114b.tar.gz 00:00:34.074 Response Code: HTTP/1.1 200 OK 00:00:34.075 Success: Status code 200 is in the accepted range: 200,404 00:00:34.075 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_3b33f433344ee82a3d99d10cfd6af5729440114b.tar.gz 00:01:25.789 [Pipeline] sh 00:01:26.076 + tar --no-same-owner -xf spdk_3b33f433344ee82a3d99d10cfd6af5729440114b.tar.gz 00:01:28.629 [Pipeline] sh 00:01:28.914 + git -C spdk log --oneline -n5 00:01:28.914 3b33f4333 test/nvme/cuse: Fix typo 00:01:28.914 bf784f7a1 test/nvme: Set SEL only when the field is supported 00:01:28.914 a5153247d autopackage: Slurp spdk-ld-path while building against native DPDK 00:01:28.914 b14fb7292 autopackage: Cut number of make jobs in half under clang+LTO 00:01:28.914 1d70a0c9e configure: Hint compiler at what linker to use via -fuse-ld 00:01:28.933 [Pipeline] withCredentials 00:01:28.943 > git --version # timeout=10 00:01:28.954 > git --version # 'git version 2.39.2' 00:01:28.970 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:28.972 [Pipeline] { 00:01:28.981 [Pipeline] retry 00:01:28.982 [Pipeline] { 00:01:29.000 [Pipeline] sh 00:01:29.286 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11 00:01:29.298 [Pipeline] } 00:01:29.319 [Pipeline] // retry 00:01:29.324 [Pipeline] } 00:01:29.345 [Pipeline] // withCredentials 00:01:29.355 [Pipeline] httpRequest 00:01:29.360 HttpMethod: GET 00:01:29.360 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:29.362 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:29.373 Response Code: HTTP/1.1 200 OK 00:01:29.374 Success: Status code 200 is in the accepted range: 200,404 00:01:29.374 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:38.173 [Pipeline] sh 00:01:38.454 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz 00:01:39.878 [Pipeline] sh 00:01:40.165 + git -C dpdk log --oneline -n5 00:01:40.165 eeb0605f11 version: 23.11.0 00:01:40.165 238778122a doc: update release notes for 23.11 00:01:40.165 46aa6b3cfc doc: fix description of RSS features 00:01:40.165 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:40.165 7e421ae345 
devtools: support skipping forbid rule check
00:01:40.184 [Pipeline] }
00:01:40.217 [Pipeline] // stage
00:01:40.225 [Pipeline] stage
00:01:40.227 [Pipeline] { (Prepare)
00:01:40.248 [Pipeline] writeFile
00:01:40.263 [Pipeline] sh
00:01:40.546 + logger -p user.info -t JENKINS-CI
00:01:40.560 [Pipeline] sh
00:01:40.844 + logger -p user.info -t JENKINS-CI
00:01:40.857 [Pipeline] sh
00:01:41.142 + cat autorun-spdk.conf
00:01:41.142 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:41.142 SPDK_TEST_NVMF=1
00:01:41.142 SPDK_TEST_NVME_CLI=1
00:01:41.142 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:41.142 SPDK_TEST_NVMF_NICS=e810
00:01:41.142 SPDK_TEST_VFIOUSER=1
00:01:41.142 SPDK_RUN_UBSAN=1
00:01:41.142 NET_TYPE=phy
00:01:41.142 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:41.142 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:41.151 RUN_NIGHTLY=1
00:01:41.156 [Pipeline] readFile
00:01:41.184 [Pipeline] withEnv
00:01:41.186 [Pipeline] {
00:01:41.201 [Pipeline] sh
00:01:41.486 + set -ex
00:01:41.486 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:41.486 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:41.486 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:41.486 ++ SPDK_TEST_NVMF=1
00:01:41.486 ++ SPDK_TEST_NVME_CLI=1
00:01:41.486 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:41.486 ++ SPDK_TEST_NVMF_NICS=e810
00:01:41.486 ++ SPDK_TEST_VFIOUSER=1
00:01:41.486 ++ SPDK_RUN_UBSAN=1
00:01:41.486 ++ NET_TYPE=phy
00:01:41.486 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:41.486 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:41.486 ++ RUN_NIGHTLY=1
00:01:41.486 + case $SPDK_TEST_NVMF_NICS in
00:01:41.486 + DRIVERS=ice
00:01:41.486 + [[ tcp == \r\d\m\a ]]
00:01:41.486 + [[ -n ice ]]
00:01:41.486 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:41.486 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:41.486 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:41.486 rmmod: ERROR: Module irdma is not currently loaded
00:01:41.486 rmmod: ERROR: Module i40iw is not currently loaded
00:01:41.486 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:41.486 + true
00:01:41.486 + for D in $DRIVERS
00:01:41.486 + sudo modprobe ice
00:01:41.486 + exit 0
00:01:41.496 [Pipeline] }
00:01:41.513 [Pipeline] // withEnv
00:01:41.518 [Pipeline] }
00:01:41.533 [Pipeline] // stage
00:01:41.542 [Pipeline] catchError
00:01:41.543 [Pipeline] {
00:01:41.557 [Pipeline] timeout
00:01:41.557 Timeout set to expire in 40 min
00:01:41.558 [Pipeline] {
00:01:41.568 [Pipeline] stage
00:01:41.569 [Pipeline] { (Tests)
00:01:41.581 [Pipeline] sh
00:01:41.865 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:41.865 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:41.865 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:41.865 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:41.865 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:41.865 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:41.865 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:41.865 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:41.865 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:41.865 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:41.865 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:41.865 + source /etc/os-release
00:01:41.865 ++ NAME='Fedora Linux'
00:01:41.865 ++ VERSION='38 (Cloud Edition)'
00:01:41.865 ++ ID=fedora
00:01:41.865 ++ VERSION_ID=38
00:01:41.865 ++ VERSION_CODENAME=
00:01:41.865 ++ PLATFORM_ID=platform:f38
00:01:41.865 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:41.865 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:41.865 ++ LOGO=fedora-logo-icon
00:01:41.865 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:41.865 ++ HOME_URL=https://fedoraproject.org/
00:01:41.865 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:41.865 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:41.865 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:41.865 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:41.865 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:41.865 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:41.865 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:41.865 ++ SUPPORT_END=2024-05-14
00:01:41.865 ++ VARIANT='Cloud Edition'
00:01:41.865 ++ VARIANT_ID=cloud
00:01:41.865 + uname -a
00:01:41.865 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:41.865 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:42.803 Hugepages
00:01:42.803 node hugesize free / total
00:01:42.803 node0 1048576kB 0 / 0
00:01:42.803 node0 2048kB 0 / 0
00:01:42.803 node1 1048576kB 0 / 0
00:01:42.803 node1 2048kB 0 / 0
00:01:42.803
00:01:42.803 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:42.803 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:42.803 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:42.803 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:42.803 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:42.803 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:42.803 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:42.803 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:42.803 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:42.803 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:42.803 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:42.803 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:42.803 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:42.803 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:42.803 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:42.803 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:42.803 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:42.803 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:42.803 + rm -f /tmp/spdk-ld-path
00:01:42.803 + source autorun-spdk.conf
00:01:42.803 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:42.803 ++ SPDK_TEST_NVMF=1
00:01:42.803 ++ SPDK_TEST_NVME_CLI=1
00:01:42.803 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:42.803 ++ SPDK_TEST_NVMF_NICS=e810
00:01:42.803 ++ SPDK_TEST_VFIOUSER=1
00:01:42.803 ++ SPDK_RUN_UBSAN=1
00:01:42.804 ++ NET_TYPE=phy
00:01:42.804 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:42.804 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:42.804 ++ RUN_NIGHTLY=1
00:01:42.804 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:42.804 + [[ -n '' ]]
00:01:42.804 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
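The setup.sh status output above shows both NUMA nodes with empty hugepage pools (0 free / 0 total) before the test run allocates any. A minimal sketch that reads the same per-node counters straight from the kernel's sysfs tree (standard Linux paths, independent of SPDK's scripts):

    # Mirror the "node hugesize free / total" table printed by setup.sh status.
    for node in /sys/devices/system/node/node[0-9]*; do
        for pool in "$node"/hugepages/hugepages-*; do
            size=${pool##*hugepages-}            # e.g. 2048kB or 1048576kB
            echo "${node##*/} $size $(<"$pool"/free_hugepages) / $(<"$pool"/nr_hugepages)"
        done
    done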
00:01:42.804 + for M in /var/spdk/build-*-manifest.txt 00:01:42.804 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:42.804 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:42.804 + for M in /var/spdk/build-*-manifest.txt 00:01:42.804 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:42.804 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:42.804 ++ uname 00:01:42.804 + [[ Linux == \L\i\n\u\x ]] 00:01:42.804 + sudo dmesg -T 00:01:42.804 + sudo dmesg --clear 00:01:43.063 + dmesg_pid=1951487 00:01:43.063 + [[ Fedora Linux == FreeBSD ]] 00:01:43.063 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:43.063 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:43.063 + sudo dmesg -Tw 00:01:43.063 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:43.063 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:43.063 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:43.063 + [[ -x /usr/src/fio-static/fio ]] 00:01:43.063 + export FIO_BIN=/usr/src/fio-static/fio 00:01:43.063 + FIO_BIN=/usr/src/fio-static/fio 00:01:43.063 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:43.063 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:43.063 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:43.063 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:43.063 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:43.063 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:43.063 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:43.063 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:43.063 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:43.063 Test configuration: 00:01:43.063 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:43.063 SPDK_TEST_NVMF=1 00:01:43.063 SPDK_TEST_NVME_CLI=1 00:01:43.063 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:43.063 SPDK_TEST_NVMF_NICS=e810 00:01:43.063 SPDK_TEST_VFIOUSER=1 00:01:43.063 SPDK_RUN_UBSAN=1 00:01:43.063 NET_TYPE=phy 00:01:43.063 SPDK_TEST_NATIVE_DPDK=v23.11 00:01:43.063 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:43.063 RUN_NIGHTLY=1 01:36:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:43.063 01:36:28 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:43.063 01:36:28 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:43.063 01:36:28 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:43.063 01:36:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.063 01:36:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:01:43.063 01:36:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.063 01:36:28 -- paths/export.sh@5 -- $ export PATH 00:01:43.063 01:36:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:43.063 01:36:28 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:43.063 01:36:28 -- common/autobuild_common.sh@435 -- $ date +%s 00:01:43.063 01:36:28 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713137788.XXXXXX 00:01:43.063 01:36:28 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713137788.G7VSlJ 00:01:43.063 01:36:28 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:01:43.063 01:36:28 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:01:43.063 01:36:28 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:43.063 01:36:28 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:01:43.063 01:36:28 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:43.063 01:36:28 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:43.063 01:36:28 -- common/autobuild_common.sh@451 -- $ get_config_params 00:01:43.063 01:36:28 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:01:43.063 01:36:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.063 01:36:28 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:01:43.063 01:36:28 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:43.063 01:36:28 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:43.064 01:36:28 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:43.064 01:36:28 -- spdk/autobuild.sh@16 -- $ date -u 00:01:43.064 Sun Apr 14 11:36:28 PM UTC 2024 00:01:43.064 01:36:28 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:43.064 LTS-20-g3b33f4333 00:01:43.064 01:36:28 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:43.064 01:36:28 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:43.064 01:36:28 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:43.064 01:36:28 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:01:43.064 01:36:28 -- 
common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:43.064 01:36:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.064 ************************************ 00:01:43.064 START TEST ubsan 00:01:43.064 ************************************ 00:01:43.064 01:36:28 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:01:43.064 using ubsan 00:01:43.064 00:01:43.064 real 0m0.000s 00:01:43.064 user 0m0.000s 00:01:43.064 sys 0m0.000s 00:01:43.064 01:36:28 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:43.064 01:36:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.064 ************************************ 00:01:43.064 END TEST ubsan 00:01:43.064 ************************************ 00:01:43.064 01:36:28 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:01:43.064 01:36:28 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:43.064 01:36:28 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:43.064 01:36:28 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:01:43.064 01:36:28 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:01:43.064 01:36:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:43.064 ************************************ 00:01:43.064 START TEST build_native_dpdk 00:01:43.064 ************************************ 00:01:43.064 01:36:28 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:01:43.064 01:36:28 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:43.064 01:36:28 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:43.064 01:36:28 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:43.064 01:36:28 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:43.064 01:36:28 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:43.064 01:36:28 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:43.064 01:36:28 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:43.064 01:36:28 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:43.064 01:36:28 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:43.064 01:36:28 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:43.064 01:36:28 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:43.064 01:36:28 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:43.064 01:36:28 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:43.064 01:36:28 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:43.064 01:36:28 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:43.064 01:36:28 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:43.064 01:36:28 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:43.064 01:36:28 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:01:43.064 01:36:28 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:43.064 01:36:28 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:01:43.064 eeb0605f11 version: 23.11.0 00:01:43.064 238778122a doc: update release notes for 23.11 00:01:43.064 46aa6b3cfc doc: fix description of RSS features 00:01:43.064 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:01:43.064 7e421ae345 devtools: support skipping forbid rule check 00:01:43.064 01:36:28 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:43.064 01:36:28 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:43.064 01:36:28 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:01:43.064 01:36:28 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:43.064 01:36:28 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:43.064 01:36:28 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:43.064 01:36:28 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:43.064 01:36:28 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:43.064 01:36:28 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:43.064 01:36:28 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:43.064 01:36:28 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:43.064 01:36:28 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:43.064 01:36:28 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:43.064 01:36:28 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:43.064 01:36:28 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:43.064 01:36:28 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:43.064 01:36:28 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:43.064 01:36:28 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0 00:01:43.064 01:36:28 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:01:43.064 01:36:28 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:43.064 01:36:28 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:43.064 01:36:28 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:43.064 01:36:28 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:43.064 01:36:28 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:43.064 01:36:28 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:43.064 01:36:28 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:43.064 01:36:28 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:43.064 01:36:28 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:43.064 01:36:28 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:43.064 01:36:28 -- scripts/common.sh@343 -- $ case "$op" in 00:01:43.064 01:36:28 -- scripts/common.sh@344 -- $ : 1 00:01:43.064 01:36:28 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:43.064 01:36:28 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:43.064 01:36:28 -- scripts/common.sh@364 -- $ decimal 23 00:01:43.064 01:36:28 -- scripts/common.sh@352 -- $ local d=23 00:01:43.064 01:36:28 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:01:43.064 01:36:28 -- scripts/common.sh@354 -- $ echo 23 00:01:43.064 01:36:28 -- scripts/common.sh@364 -- $ ver1[v]=23 00:01:43.064 01:36:28 -- scripts/common.sh@365 -- $ decimal 21 00:01:43.064 01:36:28 -- scripts/common.sh@352 -- $ local d=21 00:01:43.064 01:36:28 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:43.064 01:36:28 -- scripts/common.sh@354 -- $ echo 21 00:01:43.064 01:36:28 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:43.064 01:36:28 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:43.064 01:36:28 -- scripts/common.sh@366 -- $ return 1 00:01:43.064 01:36:28 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:43.064 patching file config/rte_config.h 00:01:43.064 Hunk #1 succeeded at 60 (offset 1 line). 00:01:43.064 01:36:28 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:01:43.064 01:36:28 -- common/autobuild_common.sh@178 -- $ uname -s 00:01:43.064 01:36:28 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:01:43.064 01:36:28 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:43.064 01:36:28 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:47.265 The Meson build system 00:01:47.265 Version: 1.3.1 00:01:47.265 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:47.265 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:01:47.265 Build type: native build 00:01:47.265 Program cat found: YES (/usr/bin/cat) 00:01:47.265 Project name: DPDK 00:01:47.265 Project version: 23.11.0 00:01:47.265 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:47.265 C linker for the host machine: gcc ld.bfd 2.39-16 00:01:47.265 Host machine cpu family: x86_64 00:01:47.265 Host machine cpu: x86_64 00:01:47.265 Message: ## Building in Developer Mode ## 00:01:47.265 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:47.265 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:01:47.265 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:01:47.265 Program python3 found: YES (/usr/bin/python3) 00:01:47.265 Program cat found: YES (/usr/bin/cat) 00:01:47.265 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
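The cmp_versions trace above splits each version string on ".-:" and compares the fields numerically; lt 23.11.0 21.11.0 returns 1 here (23 is not less than 21), so the DPDK 23.11-era patch branch runs. A compact, roughly equivalent check using GNU sort -V rather than SPDK's helper (adequate for plain dotted versions; pre-release suffixes would need the full helper):

    # lt A B: succeed only when version A sorts strictly before version B.
    lt() {
        [ "$1" = "$2" ] && return 1                  # equal is not "less than"
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
    }
    lt 23.11.0 21.11.0 && echo older || echo 'not older'   # prints "not older"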
00:01:47.265 Compiler for C supports arguments -march=native: YES 00:01:47.265 Checking for size of "void *" : 8 00:01:47.265 Checking for size of "void *" : 8 (cached) 00:01:47.265 Library m found: YES 00:01:47.265 Library numa found: YES 00:01:47.265 Has header "numaif.h" : YES 00:01:47.265 Library fdt found: NO 00:01:47.265 Library execinfo found: NO 00:01:47.265 Has header "execinfo.h" : YES 00:01:47.265 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:47.265 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:47.265 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:47.265 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:47.265 Run-time dependency openssl found: YES 3.0.9 00:01:47.265 Run-time dependency libpcap found: YES 1.10.4 00:01:47.265 Has header "pcap.h" with dependency libpcap: YES 00:01:47.265 Compiler for C supports arguments -Wcast-qual: YES 00:01:47.265 Compiler for C supports arguments -Wdeprecated: YES 00:01:47.265 Compiler for C supports arguments -Wformat: YES 00:01:47.265 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:47.265 Compiler for C supports arguments -Wformat-security: NO 00:01:47.265 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:47.265 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:47.265 Compiler for C supports arguments -Wnested-externs: YES 00:01:47.265 Compiler for C supports arguments -Wold-style-definition: YES 00:01:47.265 Compiler for C supports arguments -Wpointer-arith: YES 00:01:47.265 Compiler for C supports arguments -Wsign-compare: YES 00:01:47.265 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:47.265 Compiler for C supports arguments -Wundef: YES 00:01:47.265 Compiler for C supports arguments -Wwrite-strings: YES 00:01:47.265 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:47.265 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:01:47.265 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:47.265 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:47.265 Program objdump found: YES (/usr/bin/objdump) 00:01:47.265 Compiler for C supports arguments -mavx512f: YES 00:01:47.265 Checking if "AVX512 checking" compiles: YES 00:01:47.265 Fetching value of define "__SSE4_2__" : 1 00:01:47.265 Fetching value of define "__AES__" : 1 00:01:47.265 Fetching value of define "__AVX__" : 1 00:01:47.265 Fetching value of define "__AVX2__" : (undefined) 00:01:47.265 Fetching value of define "__AVX512BW__" : (undefined) 00:01:47.265 Fetching value of define "__AVX512CD__" : (undefined) 00:01:47.265 Fetching value of define "__AVX512DQ__" : (undefined) 00:01:47.265 Fetching value of define "__AVX512F__" : (undefined) 00:01:47.265 Fetching value of define "__AVX512VL__" : (undefined) 00:01:47.265 Fetching value of define "__PCLMUL__" : 1 00:01:47.265 Fetching value of define "__RDRND__" : 1 00:01:47.265 Fetching value of define "__RDSEED__" : (undefined) 00:01:47.265 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:47.265 Fetching value of define "__znver1__" : (undefined) 00:01:47.265 Fetching value of define "__znver2__" : (undefined) 00:01:47.265 Fetching value of define "__znver3__" : (undefined) 00:01:47.265 Fetching value of define "__znver4__" : (undefined) 00:01:47.265 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:47.265 Message: lib/log: Defining dependency "log" 00:01:47.265 Message: lib/kvargs: Defining dependency 
"kvargs" 00:01:47.265 Message: lib/telemetry: Defining dependency "telemetry" 00:01:47.265 Checking for function "getentropy" : NO 00:01:47.265 Message: lib/eal: Defining dependency "eal" 00:01:47.265 Message: lib/ring: Defining dependency "ring" 00:01:47.265 Message: lib/rcu: Defining dependency "rcu" 00:01:47.265 Message: lib/mempool: Defining dependency "mempool" 00:01:47.265 Message: lib/mbuf: Defining dependency "mbuf" 00:01:47.265 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:47.265 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:47.265 Compiler for C supports arguments -mpclmul: YES 00:01:47.265 Compiler for C supports arguments -maes: YES 00:01:47.265 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:47.265 Compiler for C supports arguments -mavx512bw: YES 00:01:47.265 Compiler for C supports arguments -mavx512dq: YES 00:01:47.265 Compiler for C supports arguments -mavx512vl: YES 00:01:47.265 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:47.265 Compiler for C supports arguments -mavx2: YES 00:01:47.265 Compiler for C supports arguments -mavx: YES 00:01:47.265 Message: lib/net: Defining dependency "net" 00:01:47.265 Message: lib/meter: Defining dependency "meter" 00:01:47.265 Message: lib/ethdev: Defining dependency "ethdev" 00:01:47.265 Message: lib/pci: Defining dependency "pci" 00:01:47.265 Message: lib/cmdline: Defining dependency "cmdline" 00:01:47.265 Message: lib/metrics: Defining dependency "metrics" 00:01:47.265 Message: lib/hash: Defining dependency "hash" 00:01:47.265 Message: lib/timer: Defining dependency "timer" 00:01:47.265 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:47.265 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:47.265 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:47.265 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:47.265 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:47.265 Message: lib/acl: Defining dependency "acl" 00:01:47.265 Message: lib/bbdev: Defining dependency "bbdev" 00:01:47.265 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:47.265 Run-time dependency libelf found: YES 0.190 00:01:47.265 Message: lib/bpf: Defining dependency "bpf" 00:01:47.265 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:47.265 Message: lib/compressdev: Defining dependency "compressdev" 00:01:47.265 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:47.265 Message: lib/distributor: Defining dependency "distributor" 00:01:47.265 Message: lib/dmadev: Defining dependency "dmadev" 00:01:47.265 Message: lib/efd: Defining dependency "efd" 00:01:47.265 Message: lib/eventdev: Defining dependency "eventdev" 00:01:47.265 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:47.265 Message: lib/gpudev: Defining dependency "gpudev" 00:01:47.265 Message: lib/gro: Defining dependency "gro" 00:01:47.265 Message: lib/gso: Defining dependency "gso" 00:01:47.265 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:47.265 Message: lib/jobstats: Defining dependency "jobstats" 00:01:47.265 Message: lib/latencystats: Defining dependency "latencystats" 00:01:47.265 Message: lib/lpm: Defining dependency "lpm" 00:01:47.265 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:47.265 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:47.265 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:47.265 Compiler for C 
supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:01:47.265 Message: lib/member: Defining dependency "member" 00:01:47.265 Message: lib/pcapng: Defining dependency "pcapng" 00:01:47.265 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:47.265 Message: lib/power: Defining dependency "power" 00:01:47.265 Message: lib/rawdev: Defining dependency "rawdev" 00:01:47.265 Message: lib/regexdev: Defining dependency "regexdev" 00:01:47.265 Message: lib/mldev: Defining dependency "mldev" 00:01:47.265 Message: lib/rib: Defining dependency "rib" 00:01:47.265 Message: lib/reorder: Defining dependency "reorder" 00:01:47.265 Message: lib/sched: Defining dependency "sched" 00:01:47.265 Message: lib/security: Defining dependency "security" 00:01:47.265 Message: lib/stack: Defining dependency "stack" 00:01:47.265 Has header "linux/userfaultfd.h" : YES 00:01:47.265 Has header "linux/vduse.h" : YES 00:01:47.265 Message: lib/vhost: Defining dependency "vhost" 00:01:47.265 Message: lib/ipsec: Defining dependency "ipsec" 00:01:47.265 Message: lib/pdcp: Defining dependency "pdcp" 00:01:47.265 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:47.265 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:47.265 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:01:47.265 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:47.265 Message: lib/fib: Defining dependency "fib" 00:01:47.265 Message: lib/port: Defining dependency "port" 00:01:47.265 Message: lib/pdump: Defining dependency "pdump" 00:01:47.265 Message: lib/table: Defining dependency "table" 00:01:47.265 Message: lib/pipeline: Defining dependency "pipeline" 00:01:47.265 Message: lib/graph: Defining dependency "graph" 00:01:47.265 Message: lib/node: Defining dependency "node" 00:01:48.645 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:48.645 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:48.645 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:48.645 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:48.645 Compiler for C supports arguments -Wno-sign-compare: YES 00:01:48.645 Compiler for C supports arguments -Wno-unused-value: YES 00:01:48.645 Compiler for C supports arguments -Wno-format: YES 00:01:48.645 Compiler for C supports arguments -Wno-format-security: YES 00:01:48.645 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:01:48.645 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:01:48.645 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:01:48.645 Compiler for C supports arguments -Wno-unused-parameter: YES 00:01:48.645 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:48.645 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:48.645 Compiler for C supports arguments -mavx512bw: YES (cached) 00:01:48.645 Compiler for C supports arguments -march=skylake-avx512: YES 00:01:48.645 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:01:48.645 Has header "sys/epoll.h" : YES 00:01:48.645 Program doxygen found: YES (/usr/bin/doxygen) 00:01:48.645 Configuring doxy-api-html.conf using configuration 00:01:48.645 Configuring doxy-api-man.conf using configuration 00:01:48.645 Program mandb found: YES (/usr/bin/mandb) 00:01:48.645 Program sphinx-build found: NO 00:01:48.645 Configuring rte_build_config.h using configuration 00:01:48.645 Message: 00:01:48.645 ================= 00:01:48.645 Applications Enabled 00:01:48.645 
================= 00:01:48.645 00:01:48.645 apps: 00:01:48.645 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:01:48.645 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:01:48.645 test-pmd, test-regex, test-sad, test-security-perf, 00:01:48.645 00:01:48.645 Message: 00:01:48.645 ================= 00:01:48.645 Libraries Enabled 00:01:48.645 ================= 00:01:48.645 00:01:48.645 libs: 00:01:48.645 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:48.645 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:01:48.645 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:01:48.645 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:01:48.645 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:01:48.645 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:01:48.645 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:01:48.645 00:01:48.645 00:01:48.645 Message: 00:01:48.645 =============== 00:01:48.645 Drivers Enabled 00:01:48.645 =============== 00:01:48.645 00:01:48.645 common: 00:01:48.645 00:01:48.645 bus: 00:01:48.645 pci, vdev, 00:01:48.645 mempool: 00:01:48.645 ring, 00:01:48.645 dma: 00:01:48.645 00:01:48.645 net: 00:01:48.645 i40e, 00:01:48.645 raw: 00:01:48.646 00:01:48.646 crypto: 00:01:48.646 00:01:48.646 compress: 00:01:48.646 00:01:48.646 regex: 00:01:48.646 00:01:48.646 ml: 00:01:48.646 00:01:48.646 vdpa: 00:01:48.646 00:01:48.646 event: 00:01:48.646 00:01:48.646 baseband: 00:01:48.646 00:01:48.646 gpu: 00:01:48.646 00:01:48.646 00:01:48.646 Message: 00:01:48.646 ================= 00:01:48.646 Content Skipped 00:01:48.646 ================= 00:01:48.646 00:01:48.646 apps: 00:01:48.646 00:01:48.646 libs: 00:01:48.646 00:01:48.646 drivers: 00:01:48.646 common/cpt: not in enabled drivers build config 00:01:48.646 common/dpaax: not in enabled drivers build config 00:01:48.646 common/iavf: not in enabled drivers build config 00:01:48.646 common/idpf: not in enabled drivers build config 00:01:48.646 common/mvep: not in enabled drivers build config 00:01:48.646 common/octeontx: not in enabled drivers build config 00:01:48.646 bus/auxiliary: not in enabled drivers build config 00:01:48.646 bus/cdx: not in enabled drivers build config 00:01:48.646 bus/dpaa: not in enabled drivers build config 00:01:48.646 bus/fslmc: not in enabled drivers build config 00:01:48.646 bus/ifpga: not in enabled drivers build config 00:01:48.646 bus/platform: not in enabled drivers build config 00:01:48.646 bus/vmbus: not in enabled drivers build config 00:01:48.646 common/cnxk: not in enabled drivers build config 00:01:48.646 common/mlx5: not in enabled drivers build config 00:01:48.646 common/nfp: not in enabled drivers build config 00:01:48.646 common/qat: not in enabled drivers build config 00:01:48.646 common/sfc_efx: not in enabled drivers build config 00:01:48.646 mempool/bucket: not in enabled drivers build config 00:01:48.646 mempool/cnxk: not in enabled drivers build config 00:01:48.646 mempool/dpaa: not in enabled drivers build config 00:01:48.646 mempool/dpaa2: not in enabled drivers build config 00:01:48.646 mempool/octeontx: not in enabled drivers build config 00:01:48.646 mempool/stack: not in enabled drivers build config 00:01:48.646 dma/cnxk: not in enabled drivers build config 00:01:48.646 dma/dpaa: not in enabled drivers build config 00:01:48.646 dma/dpaa2: not in enabled drivers build 
config 00:01:48.646 dma/hisilicon: not in enabled drivers build config 00:01:48.646 dma/idxd: not in enabled drivers build config 00:01:48.646 dma/ioat: not in enabled drivers build config 00:01:48.646 dma/skeleton: not in enabled drivers build config 00:01:48.646 net/af_packet: not in enabled drivers build config 00:01:48.646 net/af_xdp: not in enabled drivers build config 00:01:48.646 net/ark: not in enabled drivers build config 00:01:48.646 net/atlantic: not in enabled drivers build config 00:01:48.646 net/avp: not in enabled drivers build config 00:01:48.646 net/axgbe: not in enabled drivers build config 00:01:48.646 net/bnx2x: not in enabled drivers build config 00:01:48.646 net/bnxt: not in enabled drivers build config 00:01:48.646 net/bonding: not in enabled drivers build config 00:01:48.646 net/cnxk: not in enabled drivers build config 00:01:48.646 net/cpfl: not in enabled drivers build config 00:01:48.646 net/cxgbe: not in enabled drivers build config 00:01:48.646 net/dpaa: not in enabled drivers build config 00:01:48.646 net/dpaa2: not in enabled drivers build config 00:01:48.646 net/e1000: not in enabled drivers build config 00:01:48.646 net/ena: not in enabled drivers build config 00:01:48.646 net/enetc: not in enabled drivers build config 00:01:48.646 net/enetfec: not in enabled drivers build config 00:01:48.646 net/enic: not in enabled drivers build config 00:01:48.646 net/failsafe: not in enabled drivers build config 00:01:48.646 net/fm10k: not in enabled drivers build config 00:01:48.646 net/gve: not in enabled drivers build config 00:01:48.646 net/hinic: not in enabled drivers build config 00:01:48.646 net/hns3: not in enabled drivers build config 00:01:48.646 net/iavf: not in enabled drivers build config 00:01:48.646 net/ice: not in enabled drivers build config 00:01:48.646 net/idpf: not in enabled drivers build config 00:01:48.646 net/igc: not in enabled drivers build config 00:01:48.646 net/ionic: not in enabled drivers build config 00:01:48.646 net/ipn3ke: not in enabled drivers build config 00:01:48.646 net/ixgbe: not in enabled drivers build config 00:01:48.646 net/mana: not in enabled drivers build config 00:01:48.646 net/memif: not in enabled drivers build config 00:01:48.646 net/mlx4: not in enabled drivers build config 00:01:48.646 net/mlx5: not in enabled drivers build config 00:01:48.646 net/mvneta: not in enabled drivers build config 00:01:48.646 net/mvpp2: not in enabled drivers build config 00:01:48.646 net/netvsc: not in enabled drivers build config 00:01:48.646 net/nfb: not in enabled drivers build config 00:01:48.646 net/nfp: not in enabled drivers build config 00:01:48.646 net/ngbe: not in enabled drivers build config 00:01:48.646 net/null: not in enabled drivers build config 00:01:48.646 net/octeontx: not in enabled drivers build config 00:01:48.646 net/octeon_ep: not in enabled drivers build config 00:01:48.646 net/pcap: not in enabled drivers build config 00:01:48.646 net/pfe: not in enabled drivers build config 00:01:48.646 net/qede: not in enabled drivers build config 00:01:48.646 net/ring: not in enabled drivers build config 00:01:48.646 net/sfc: not in enabled drivers build config 00:01:48.646 net/softnic: not in enabled drivers build config 00:01:48.646 net/tap: not in enabled drivers build config 00:01:48.646 net/thunderx: not in enabled drivers build config 00:01:48.646 net/txgbe: not in enabled drivers build config 00:01:48.646 net/vdev_netvsc: not in enabled drivers build config 00:01:48.646 net/vhost: not in enabled drivers build config 
00:01:48.646 net/virtio: not in enabled drivers build config 00:01:48.646 net/vmxnet3: not in enabled drivers build config 00:01:48.646 raw/cnxk_bphy: not in enabled drivers build config 00:01:48.646 raw/cnxk_gpio: not in enabled drivers build config 00:01:48.646 raw/dpaa2_cmdif: not in enabled drivers build config 00:01:48.646 raw/ifpga: not in enabled drivers build config 00:01:48.646 raw/ntb: not in enabled drivers build config 00:01:48.646 raw/skeleton: not in enabled drivers build config 00:01:48.646 crypto/armv8: not in enabled drivers build config 00:01:48.646 crypto/bcmfs: not in enabled drivers build config 00:01:48.646 crypto/caam_jr: not in enabled drivers build config 00:01:48.646 crypto/ccp: not in enabled drivers build config 00:01:48.646 crypto/cnxk: not in enabled drivers build config 00:01:48.646 crypto/dpaa_sec: not in enabled drivers build config 00:01:48.646 crypto/dpaa2_sec: not in enabled drivers build config 00:01:48.646 crypto/ipsec_mb: not in enabled drivers build config 00:01:48.646 crypto/mlx5: not in enabled drivers build config 00:01:48.646 crypto/mvsam: not in enabled drivers build config 00:01:48.646 crypto/nitrox: not in enabled drivers build config 00:01:48.646 crypto/null: not in enabled drivers build config 00:01:48.646 crypto/octeontx: not in enabled drivers build config 00:01:48.646 crypto/openssl: not in enabled drivers build config 00:01:48.646 crypto/scheduler: not in enabled drivers build config 00:01:48.646 crypto/uadk: not in enabled drivers build config 00:01:48.646 crypto/virtio: not in enabled drivers build config 00:01:48.646 compress/isal: not in enabled drivers build config 00:01:48.646 compress/mlx5: not in enabled drivers build config 00:01:48.646 compress/octeontx: not in enabled drivers build config 00:01:48.646 compress/zlib: not in enabled drivers build config 00:01:48.646 regex/mlx5: not in enabled drivers build config 00:01:48.646 regex/cn9k: not in enabled drivers build config 00:01:48.646 ml/cnxk: not in enabled drivers build config 00:01:48.646 vdpa/ifc: not in enabled drivers build config 00:01:48.646 vdpa/mlx5: not in enabled drivers build config 00:01:48.646 vdpa/nfp: not in enabled drivers build config 00:01:48.646 vdpa/sfc: not in enabled drivers build config 00:01:48.646 event/cnxk: not in enabled drivers build config 00:01:48.646 event/dlb2: not in enabled drivers build config 00:01:48.646 event/dpaa: not in enabled drivers build config 00:01:48.646 event/dpaa2: not in enabled drivers build config 00:01:48.646 event/dsw: not in enabled drivers build config 00:01:48.646 event/opdl: not in enabled drivers build config 00:01:48.646 event/skeleton: not in enabled drivers build config 00:01:48.646 event/sw: not in enabled drivers build config 00:01:48.646 event/octeontx: not in enabled drivers build config 00:01:48.646 baseband/acc: not in enabled drivers build config 00:01:48.646 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:01:48.646 baseband/fpga_lte_fec: not in enabled drivers build config 00:01:48.646 baseband/la12xx: not in enabled drivers build config 00:01:48.646 baseband/null: not in enabled drivers build config 00:01:48.646 baseband/turbo_sw: not in enabled drivers build config 00:01:48.646 gpu/cuda: not in enabled drivers build config 00:01:48.646 00:01:48.646 00:01:48.646 Build targets in project: 220 00:01:48.646 00:01:48.646 DPDK 23.11.0 00:01:48.646 00:01:48.646 User defined options 00:01:48.646 libdir : lib 00:01:48.646 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:48.646 
c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:01:48.646 c_link_args : 00:01:48.646 enable_docs : false 00:01:48.646 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:01:48.646 enable_kmods : false 00:01:48.646 machine : native 00:01:48.646 tests : false 00:01:48.646 00:01:48.646 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:48.646 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:01:48.646 01:36:34 -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 00:01:48.646 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:01:48.920 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:48.920 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:48.920 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:48.920 [4/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:48.920 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:48.920 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:48.920 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:48.920 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:48.920 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:48.920 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:48.920 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:48.920 [12/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:48.920 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:48.920 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:48.920 [15/710] Linking static target lib/librte_kvargs.a 00:01:48.920 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:49.178 [17/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:49.178 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:49.178 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:49.178 [20/710] Linking static target lib/librte_log.a 00:01:49.178 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:49.443 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.703 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:49.703 [24/710] Linking target lib/librte_log.so.24.0 00:01:49.965 [25/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:49.965 [26/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:49.965 [27/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:49.965 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:49.965 [29/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:49.965 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:49.965 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:49.965 [32/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 
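Meson's warning above notes that the bare `meson [options]` invocation used by autobuild_common.sh is deprecated in favor of an explicit `meson setup` (and the earlier config/meson.build warning likewise deprecates -Dmachine in favor of -Dcpu_instruction_set). The configure-and-build pair recorded in this log, respelled in the modern form as a sketch (DPDK_DIR is shorthand introduced here; the flag set, including the trailing comma in -Denable_drivers, is exactly the one captured above):

    DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
    cd "$DPDK_DIR"
    # Configure into build-tmp, installing into $DPDK_DIR/build.
    meson setup build-tmp --prefix="$DPDK_DIR/build" --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= \
        '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dmachine=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
    # Build; -j48 matches the job count used by autobuild_common.sh@186 in this run.
    ninja -C build-tmp -j48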
00:01:49.965 [33/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:49.965 [34/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:49.965 [35/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:49.965 [36/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:49.965 [37/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:49.965 [38/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:49.965 [39/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:49.965 [40/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:49.965 [41/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:49.965 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:49.965 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:49.965 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:49.965 [45/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:49.965 [46/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:49.965 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:50.230 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:50.231 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:50.231 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:50.231 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:50.231 [52/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:50.231 [53/710] Linking target lib/librte_kvargs.so.24.0 00:01:50.231 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:50.231 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:50.231 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:50.231 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:50.231 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:50.231 [59/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:50.231 [60/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:50.231 [61/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:50.231 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:50.231 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:50.490 [64/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:50.490 [65/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:50.490 [66/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:50.490 [67/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:50.755 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:50.755 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:50.755 [70/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:50.755 [71/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:50.755 [72/710] Linking static target lib/librte_pci.a 
00:01:50.755 [73/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:50.755 [74/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:50.755 [75/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:51.015 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:51.015 [77/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:51.015 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:51.015 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:51.015 [80/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.015 [81/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:51.015 [82/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:51.015 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:51.015 [84/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:51.015 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:51.015 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:51.015 [87/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:51.277 [88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:51.277 [89/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:51.277 [90/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:51.277 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:51.277 [92/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:51.277 [93/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:51.277 [94/710] Linking static target lib/librte_ring.a 00:01:51.277 [95/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:51.277 [96/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:51.277 [97/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:51.277 [98/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:51.277 [99/710] Linking static target lib/librte_meter.a 00:01:51.277 [100/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:51.277 [101/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:51.536 [102/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:51.536 [103/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:51.536 [104/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:51.536 [105/710] Linking static target lib/librte_telemetry.a 00:01:51.536 [106/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:51.536 [107/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:51.536 [108/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:51.536 [109/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:51.536 [110/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:51.536 [111/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:51.536 [112/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:51.536 [113/710] Compiling C object 
lib/librte_net.a.p/net_rte_net.c.o 00:01:51.536 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:51.536 [115/710] Linking static target lib/librte_eal.a 00:01:51.536 [116/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.796 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:51.796 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:51.796 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:51.796 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:51.796 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:51.796 [122/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:51.796 [123/710] Linking static target lib/librte_net.a 00:01:51.796 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:51.796 [125/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:52.065 [126/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:52.065 [127/710] Linking static target lib/librte_cmdline.a 00:01:52.065 [128/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.065 [129/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:52.065 [130/710] Linking static target lib/librte_mempool.a 00:01:52.065 [131/710] Linking target lib/librte_telemetry.so.24.0 00:01:52.065 [132/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:52.326 [133/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:52.326 [134/710] Linking static target lib/librte_cfgfile.a 00:01:52.326 [135/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.326 [136/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:52.326 [137/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:52.326 [138/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:52.326 [139/710] Linking static target lib/librte_metrics.a 00:01:52.326 [140/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:52.326 [141/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:52.326 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:52.326 [143/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:52.589 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:52.589 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:52.589 [146/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:52.589 [147/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:52.589 [148/710] Linking static target lib/librte_bitratestats.a 00:01:52.589 [149/710] Linking static target lib/librte_rcu.a 00:01:52.589 [150/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:52.853 [151/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:52.853 [152/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:52.853 [153/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.853 [154/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:52.853 [155/710] 
Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:52.853 [156/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.853 [157/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:52.853 [158/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:52.853 [159/710] Linking static target lib/librte_timer.a 00:01:53.116 [160/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:53.116 [161/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:53.116 [162/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.116 [163/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.116 [164/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.116 [165/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:53.116 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:53.116 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:53.116 [168/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.380 [169/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:53.380 [170/710] Linking static target lib/librte_bbdev.a 00:01:53.380 [171/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:53.380 [172/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:53.380 [173/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:53.380 [174/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:53.644 [175/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:53.644 [176/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:53.644 [177/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:53.644 [178/710] Linking static target lib/librte_compressdev.a 00:01:53.644 [179/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:53.644 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:53.908 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:53.908 [182/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:53.908 [183/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:53.908 [184/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:53.908 [185/710] Linking static target lib/librte_distributor.a 00:01:54.172 [186/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:54.172 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.172 [188/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:54.172 [189/710] Linking static target lib/librte_bpf.a 00:01:54.434 [190/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:54.434 [191/710] Linking static target lib/librte_dmadev.a 00:01:54.434 [192/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:54.434 [193/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:54.434 
[194/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.434 [195/710] Linking static target lib/librte_dispatcher.a 00:01:54.434 [196/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:54.434 [197/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.434 [198/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:54.434 [199/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:54.703 [200/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:54.703 [201/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:54.703 [202/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:54.703 [203/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:54.703 [204/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:54.703 [205/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:54.703 [206/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:54.703 [207/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:54.703 [208/710] Linking static target lib/librte_gpudev.a 00:01:54.703 [209/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:54.703 [210/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:54.703 [211/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:54.703 [212/710] Linking static target lib/librte_gro.a 00:01:54.703 [213/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:54.703 [214/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.703 [215/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:54.967 [216/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:54.967 [217/710] Linking static target lib/librte_jobstats.a 00:01:54.967 [218/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:54.967 [219/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:54.967 [220/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:55.231 [221/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.231 [222/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.231 [223/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:55.231 [224/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:55.231 [225/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.231 [226/710] Linking static target lib/librte_latencystats.a 00:01:55.495 [227/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:55.495 [228/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:55.495 [229/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:55.495 [230/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:55.495 [231/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:55.495 [232/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:55.495 [233/710] Linking static target lib/member/libsketch_avx512_tmp.a 
00:01:55.769 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:55.769 [235/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:55.769 [236/710] Linking static target lib/librte_ip_frag.a 00:01:55.769 [237/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.769 [238/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:55.769 [239/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:55.769 [240/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:55.769 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:56.029 [242/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:56.029 [243/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:56.029 [244/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.029 [245/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.029 [246/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:56.029 [247/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:56.295 [248/710] Linking static target lib/librte_gso.a 00:01:56.295 [249/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:56.295 [250/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:56.295 [251/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:56.295 [252/710] Linking static target lib/librte_regexdev.a 00:01:56.295 [253/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:56.295 [254/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:56.554 [255/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:56.554 [256/710] Linking static target lib/librte_rawdev.a 00:01:56.554 [257/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.554 [258/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:56.554 [259/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:56.554 [260/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:56.554 [261/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:56.554 [262/710] Linking static target lib/librte_efd.a 00:01:56.554 [263/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:56.554 [264/710] Linking static target lib/librte_pcapng.a 00:01:56.554 [265/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:56.554 [266/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:56.554 [267/710] Linking static target lib/librte_mldev.a 00:01:56.817 [268/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:56.817 [269/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:56.817 [270/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:56.817 [271/710] Linking static target lib/librte_stack.a 00:01:56.817 [272/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:56.817 [273/710] Linking static target lib/acl/libavx2_tmp.a 00:01:56.817 [274/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:56.817 [275/710] Linking static target lib/librte_lpm.a 00:01:57.080 
[276/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.080 [277/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:57.080 [278/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.080 [279/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:57.080 [280/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:57.080 [281/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:57.080 [282/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:57.080 [283/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:57.080 [284/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:57.080 [285/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.080 [286/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.080 [287/710] Linking static target lib/librte_hash.a 00:01:57.340 [288/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:57.340 [289/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:57.340 [290/710] Linking static target lib/acl/libavx512_tmp.a 00:01:57.340 [291/710] Linking static target lib/librte_acl.a 00:01:57.340 [292/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:57.340 [293/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:57.340 [294/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:57.340 [295/710] Linking static target lib/librte_power.a 00:01:57.340 [296/710] Linking static target lib/librte_reorder.a 00:01:57.340 [297/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.605 [298/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.605 [299/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:57.605 [300/710] Linking static target lib/librte_security.a 00:01:57.605 [301/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:57.870 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:57.870 [303/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:57.870 [304/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:57.870 [305/710] Linking static target lib/librte_rib.a 00:01:57.870 [306/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:57.870 [307/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.870 [308/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:57.870 [309/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:57.870 [310/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.870 [311/710] Linking static target lib/librte_mbuf.a 00:01:57.870 [312/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:57.870 [313/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:58.134 [314/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.134 [315/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:58.134 [316/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 
00:01:58.134 [317/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:58.134 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.134 [319/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:58.134 [320/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:58.399 [321/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:58.399 [322/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:58.399 [323/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:58.399 [324/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:58.399 [325/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:58.399 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.399 [327/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:58.399 [328/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.664 [329/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:58.664 [330/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.665 [331/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:58.665 [332/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.924 [333/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:59.190 [334/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:59.190 [335/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:59.190 [336/710] Linking static target lib/librte_eventdev.a 00:01:59.190 [337/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:59.190 [338/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:59.190 [339/710] Linking static target lib/librte_member.a 00:01:59.190 [340/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:59.450 [341/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:59.450 [342/710] Linking static target lib/librte_cryptodev.a 00:01:59.450 [343/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:59.450 [344/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:59.450 [345/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:59.450 [346/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:59.450 [347/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:59.450 [348/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:59.450 [349/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:59.450 [350/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:59.450 [351/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:59.450 [352/710] Linking static target lib/librte_sched.a 00:01:59.715 [353/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:59.715 [354/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:59.715 [355/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:59.715 [356/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:59.715 [357/710] Linking static target 
lib/librte_fib.a 00:01:59.715 [358/710] Linking static target lib/librte_ethdev.a 00:01:59.715 [359/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.715 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:59.715 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:59.715 [362/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:59.715 [363/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:59.975 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:59.975 [365/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:59.975 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:59.975 [367/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:00.241 [368/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:00.241 [369/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:00.241 [370/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.241 [371/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:00.241 [372/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:00.241 [373/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:00.504 [374/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:00.504 [375/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:00.504 [376/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:00.504 [377/710] Linking static target lib/librte_pdump.a 00:02:00.766 [378/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:00.766 [379/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:00.766 [380/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:00.766 [381/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:00.766 [382/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:00.766 [383/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:00.766 [384/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:00.766 [385/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:00.766 [386/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:00.766 [387/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:01.032 [388/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:01.032 [389/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:01.032 [390/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.032 [391/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:01.032 [392/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:01.032 [393/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:01.032 [394/710] Linking static target lib/librte_ipsec.a 00:02:01.296 [395/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.296 [396/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:01.296 [397/710] Linking static target lib/librte_table.a 
00:02:01.296 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:01.296 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:01.556 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:01.556 [401/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.824 [402/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:01.824 [403/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.824 [404/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:01.824 [405/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:02.109 [406/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:02.109 [407/710] Linking target lib/librte_eal.so.24.0 00:02:02.109 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:02.109 [409/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:02.109 [410/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:02.109 [411/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:02.109 [412/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:02.109 [413/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:02.381 [414/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:02.381 [415/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:02.381 [416/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:02.381 [417/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:02.381 [418/710] Linking target lib/librte_ring.so.24.0 00:02:02.381 [419/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.381 [420/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.381 [421/710] Linking target lib/librte_meter.so.24.0 00:02:02.381 [422/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:02.381 [423/710] Linking target lib/librte_pci.so.24.0 00:02:02.381 [424/710] Linking target lib/librte_timer.so.24.0 00:02:02.381 [425/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:02.645 [426/710] Linking target lib/librte_acl.so.24.0 00:02:02.645 [427/710] Linking target lib/librte_cfgfile.so.24.0 00:02:02.645 [428/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:02.645 [429/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:02.645 [430/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:02.645 [431/710] Linking target lib/librte_dmadev.so.24.0 00:02:02.645 [432/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:02.645 [433/710] Linking target lib/librte_rcu.so.24.0 00:02:02.645 [434/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:02.645 [435/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:02.645 [436/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:02.645 [437/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:02.645 [438/710] Linking target lib/librte_mempool.so.24.0 00:02:02.645 [439/710] Linking target 
lib/librte_jobstats.so.24.0 00:02:02.645 [440/710] Linking target lib/librte_rawdev.so.24.0 00:02:02.645 [441/710] Linking static target lib/librte_port.a 00:02:02.645 [442/710] Linking target lib/librte_stack.so.24.0 00:02:02.645 [443/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:02.645 [444/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:02.645 [445/710] Linking static target drivers/librte_bus_vdev.a 00:02:02.645 [446/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:02.915 [447/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:02.915 [448/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:02.915 [449/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:02.915 [450/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:02.915 [451/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:03.181 [452/710] Linking target lib/librte_mbuf.so.24.0 00:02:03.181 [453/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:03.181 [454/710] Linking static target lib/librte_graph.a 00:02:03.181 [455/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:03.181 [456/710] Linking target lib/librte_rib.so.24.0 00:02:03.181 [457/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:03.181 [458/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:03.181 [459/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:03.181 [460/710] Linking static target drivers/librte_bus_pci.a 00:02:03.181 [461/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.181 [462/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:03.181 [463/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:03.181 [464/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:03.448 [465/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:03.448 [466/710] Linking target lib/librte_net.so.24.0 00:02:03.448 [467/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:03.448 [468/710] Linking target lib/librte_bbdev.so.24.0 00:02:03.448 [469/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:03.448 [470/710] Linking target lib/librte_compressdev.so.24.0 00:02:03.448 [471/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:03.448 [472/710] Linking target lib/librte_distributor.so.24.0 00:02:03.448 [473/710] Linking target lib/librte_cryptodev.so.24.0 00:02:03.448 [474/710] Linking target lib/librte_gpudev.so.24.0 00:02:03.448 [475/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.448 [476/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:03.448 [477/710] Linking target lib/librte_regexdev.so.24.0 00:02:03.710 [478/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:03.710 [479/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:03.710 [480/710] Linking target lib/librte_mldev.so.24.0 00:02:03.710 [481/710] Linking 
target lib/librte_reorder.so.24.0 00:02:03.710 [482/710] Linking target lib/librte_sched.so.24.0 00:02:03.710 [483/710] Linking target lib/librte_fib.so.24.0 00:02:03.710 [484/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:03.710 [485/710] Linking target lib/librte_cmdline.so.24.0 00:02:03.710 [486/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:03.710 [487/710] Linking target lib/librte_hash.so.24.0 00:02:03.710 [488/710] Linking target lib/librte_security.so.24.0 00:02:03.975 [489/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:03.975 [490/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:03.975 [491/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:03.975 [492/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:03.975 [493/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.975 [494/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:03.975 [495/710] Linking static target drivers/librte_mempool_ring.a 00:02:03.975 [496/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:03.975 [497/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:03.975 [498/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:03.975 [499/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:03.975 [500/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:03.975 [501/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.975 [502/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:03.975 [503/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:04.236 [504/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:04.236 [505/710] Linking target lib/librte_efd.so.24.0 00:02:04.236 [506/710] Linking target lib/librte_lpm.so.24.0 00:02:04.236 [507/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:04.236 [508/710] Linking target lib/librte_member.so.24.0 00:02:04.236 [509/710] Linking target lib/librte_ipsec.so.24.0 00:02:04.236 [510/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:04.236 [511/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:04.236 [512/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:04.236 [513/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:04.236 [514/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:04.236 [515/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:04.236 [516/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:04.498 [517/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:04.498 [518/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:04.498 [519/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:04.498 [520/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:04.498 [521/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:04.498 [522/710] 
Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:04.498 [523/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:04.498 [524/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:04.762 [525/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:05.026 [526/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:05.026 [527/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:05.026 [528/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:05.286 [529/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:05.286 [530/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:05.286 [531/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:05.286 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:05.551 [533/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:05.815 [534/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:05.815 [535/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:05.815 [536/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:05.815 [537/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:05.815 [538/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:05.815 [539/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:05.815 [540/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:05.815 [541/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:05.815 [542/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:05.815 [543/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:06.075 [544/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:06.075 [545/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:06.075 [546/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:06.075 [547/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:06.075 [548/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:06.075 [549/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:06.340 [550/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:06.340 [551/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:06.340 [552/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:06.340 [553/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:06.340 [554/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:06.606 [555/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:06.606 [556/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:06.606 [557/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:06.870 [558/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:06.870 [559/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:07.135 [560/710] Generating lib/ethdev.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:07.398 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:07.398 [562/710] Linking target lib/librte_ethdev.so.24.0 00:02:07.398 [563/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:07.398 [564/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:07.398 [565/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:07.398 [566/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:07.661 [567/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:07.661 [568/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:07.661 [569/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:07.661 [570/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:07.661 [571/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:07.661 [572/710] Linking target lib/librte_metrics.so.24.0 00:02:07.661 [573/710] Linking target lib/librte_bpf.so.24.0 00:02:07.661 [574/710] Linking target lib/librte_gro.so.24.0 00:02:07.661 [575/710] Linking target lib/librte_eventdev.so.24.0 00:02:07.661 [576/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:07.661 [577/710] Linking target lib/librte_gso.so.24.0 00:02:07.921 [578/710] Linking target lib/librte_pcapng.so.24.0 00:02:07.921 [579/710] Linking target lib/librte_ip_frag.so.24.0 00:02:07.921 [580/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:07.921 [581/710] Linking target lib/librte_power.so.24.0 00:02:07.921 [582/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:07.921 [583/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:07.921 [584/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:07.921 [585/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:07.921 [586/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:07.921 [587/710] Linking target lib/librte_bitratestats.so.24.0 00:02:07.921 [588/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:07.922 [589/710] Linking target lib/librte_latencystats.so.24.0 00:02:07.922 [590/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:07.922 [591/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:07.922 [592/710] Linking target lib/librte_dispatcher.so.24.0 00:02:08.188 [593/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:08.188 [594/710] Linking target lib/librte_pdump.so.24.0 00:02:08.188 [595/710] Linking target lib/librte_graph.so.24.0 00:02:08.188 [596/710] Linking target lib/librte_port.so.24.0 00:02:08.188 [597/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:08.188 [598/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:08.188 [599/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:08.188 [600/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:08.449 [601/710] Generating symbol file 
lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:08.449 [602/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:08.449 [603/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:08.449 [604/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:08.449 [605/710] Linking static target lib/librte_pdcp.a 00:02:08.449 [606/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:08.449 [607/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:08.449 [608/710] Linking target lib/librte_table.so.24.0 00:02:08.449 [609/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:08.714 [610/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:08.714 [611/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:08.714 [612/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:08.714 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:08.714 [614/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:08.977 [615/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:08.977 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:08.977 [617/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:08.977 [618/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.977 [619/710] Linking target lib/librte_pdcp.so.24.0 00:02:08.977 [620/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:09.237 [621/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:09.237 [622/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:09.237 [623/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:09.237 [624/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:09.237 [625/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:09.502 [626/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:09.502 [627/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:09.502 [628/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:09.502 [629/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:09.761 [630/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:10.020 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:10.020 [632/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:10.020 [633/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:10.020 [634/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:10.020 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:10.020 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:10.020 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:10.279 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:10.279 [639/710] Compiling C object 
app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:10.279 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:10.279 [641/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:10.279 [642/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:10.538 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:10.538 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:10.538 [645/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:10.538 [646/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:10.797 [647/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:10.797 [648/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:10.797 [649/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:11.056 [650/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:11.056 [651/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:11.056 [652/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:11.056 [653/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:11.056 [654/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:11.056 [655/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:11.314 [656/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:11.572 [657/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:11.572 [658/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:11.572 [659/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:11.572 [660/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:11.572 [661/710] Linking static target drivers/librte_net_i40e.a 00:02:11.572 [662/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:11.572 [663/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:11.830 [664/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:12.116 [665/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:12.116 [666/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.116 [667/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:12.116 [668/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:12.116 [669/710] Linking target drivers/librte_net_i40e.so.24.0 00:02:12.375 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:12.634 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:12.894 [672/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:13.152 [673/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:13.152 [674/710] Linking static target lib/librte_node.a 00:02:13.410 [675/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.410 [676/710] Linking target lib/librte_node.so.24.0 00:02:14.344 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:14.630 
[678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:14.630 [679/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:16.534 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:16.792 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:22.058 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:54.124 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:54.124 [684/710] Linking static target lib/librte_vhost.a 00:02:54.124 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.124 [686/710] Linking target lib/librte_vhost.so.24.0 00:03:04.100 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:04.100 [688/710] Linking static target lib/librte_pipeline.a 00:03:04.100 [689/710] Linking target app/dpdk-test-cmdline 00:03:04.100 [690/710] Linking target app/dpdk-test-acl 00:03:04.100 [691/710] Linking target app/dpdk-dumpcap 00:03:04.100 [692/710] Linking target app/dpdk-test-dma-perf 00:03:04.100 [693/710] Linking target app/dpdk-test-fib 00:03:04.100 [694/710] Linking target app/dpdk-pdump 00:03:04.100 [695/710] Linking target app/dpdk-test-sad 00:03:04.100 [696/710] Linking target app/dpdk-proc-info 00:03:04.100 [697/710] Linking target app/dpdk-test-gpudev 00:03:04.100 [698/710] Linking target app/dpdk-test-pipeline 00:03:04.100 [699/710] Linking target app/dpdk-test-regex 00:03:04.100 [700/710] Linking target app/dpdk-test-security-perf 00:03:04.100 [701/710] Linking target app/dpdk-graph 00:03:04.100 [702/710] Linking target app/dpdk-test-mldev 00:03:04.100 [703/710] Linking target app/dpdk-test-bbdev 00:03:04.100 [704/710] Linking target app/dpdk-test-flow-perf 00:03:04.100 [705/710] Linking target app/dpdk-test-crypto-perf 00:03:04.100 [706/710] Linking target app/dpdk-test-compress-perf 00:03:04.100 [707/710] Linking target app/dpdk-test-eventdev 00:03:04.359 [708/710] Linking target app/dpdk-testpmd 00:03:06.305 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.305 [710/710] Linking target lib/librte_pipeline.so.24.0 00:03:06.305 01:37:51 -- common/autobuild_common.sh@187 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:03:06.305 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:03:06.566 [0/1] Installing files. 
00:03:06.829 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:03:06.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.829 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.830 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.830 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:06.831 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:06.831 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.832 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.832 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.833 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:06.834 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:06.834 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:06.834 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:03:06.834 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:06.834 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:07.779 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:07.779 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:07.779 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:03:07.779 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:03:07.779 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.779 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.779 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.779 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.779 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.779 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.779 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.779 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.779 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.779 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.779 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.779 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.779 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.779 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.779 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.779 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.780 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.780 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.780 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.780 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.780 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.781 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.782 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:07.783 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:03:07.783 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:03:07.783 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:03:07.783 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:03:07.783 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:03:07.784 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:03:07.784 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:03:07.784 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:03:07.784 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:03:07.784 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:03:07.784 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:03:07.784 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:03:07.784 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:03:07.784 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:03:07.784 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:03:07.784 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:03:07.784 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:03:07.784 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:03:07.784 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:03:07.784 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:03:07.784 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:03:07.784 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:03:07.784 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:03:07.784 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:03:07.784 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:03:07.784 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:03:07.784 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:03:07.784 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:03:07.784 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:03:07.784 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:03:07.784 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:03:07.784 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:03:07.784 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:03:07.784 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:03:07.784 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:03:07.784 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:03:07.784 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:03:07.784 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:03:07.784 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:03:07.784 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:03:07.784 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:03:07.784 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:03:07.784 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:03:07.784 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:03:07.784 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:03:07.784 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:03:07.784 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:03:07.784 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:03:07.784 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:03:07.784 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:03:07.784 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:03:07.784 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:03:07.784 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:03:07.784 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:03:07.784 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:03:07.784 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:03:07.784 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:03:07.784 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:03:07.784 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:03:07.784 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:03:07.784 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:03:07.784 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:03:07.784 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:03:07.784 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:03:07.784 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:03:07.784 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:03:07.784 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:03:07.784 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:03:07.784 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:03:07.784 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:03:07.784 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:03:07.784 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:03:07.784 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:03:07.784 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:03:07.784 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:03:07.784 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:03:07.784 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:03:07.784 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:07.784 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:07.784 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:07.784 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:07.784 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:07.784 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:07.784 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:07.784 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:07.784 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:07.784 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:07.784 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:07.784 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:07.784 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:03:07.784 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:03:07.785 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:03:07.785 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:03:07.785 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:03:07.785 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:03:07.785 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:03:07.785 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:03:07.785 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:03:07.785 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:03:07.785 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:03:07.785 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:03:07.785 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:03:07.785 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:03:07.785 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:03:07.785 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:03:07.785 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:03:07.785 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:03:07.785 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:03:07.785 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:03:07.785 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:03:07.785 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:03:07.785 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:03:07.785 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:03:07.785 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:03:07.785 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:03:07.785 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:03:07.785 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:03:07.785 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:03:07.785 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:03:07.785 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:03:07.785 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:03:07.785 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:03:07.785 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:03:07.785 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:03:07.785 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:03:07.785 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:07.785 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:07.785 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:07.785 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:07.785 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:07.785 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:07.785 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:03:07.785 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:07.785 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0'
00:03:07.785 01:37:53 -- common/autobuild_common.sh@189 -- $ uname -s
00:03:07.785 01:37:53 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:03:07.785 01:37:53 -- common/autobuild_common.sh@200 -- $ cat
00:03:07.785 01:37:53 -- common/autobuild_common.sh@205 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:03:07.785
00:03:07.785 real 1m24.711s
00:03:07.785 user 17m50.525s
00:03:07.785 sys 2m8.022s
00:03:07.785 01:37:53 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:03:07.785 01:37:53 -- common/autotest_common.sh@10 -- $ set +x
00:03:07.785 ************************************
00:03:07.785 END TEST build_native_dpdk
00:03:07.785 ************************************
00:03:07.785 01:37:53 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:03:07.785 01:37:53 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:03:07.785 01:37:53 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:03:07.785 01:37:53 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:03:07.785 01:37:53 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:03:07.785 01:37:53 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:03:07.785 01:37:53 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:03:07.785 01:37:53 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared
00:03:07.785 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs...
00:03:08.046 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:03:08.046 DPDK includes: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:03:08.046 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:03:08.306 Using 'verbs' RDMA provider
00:03:18.549 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:03:28.532 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:03:28.532 Creating mk/config.mk...done.
00:03:28.532 Creating mk/cc.flags.mk...done.
00:03:28.532 Type 'make' to build.
00:03:28.532 01:38:12 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:03:28.532 01:38:12 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:03:28.532 01:38:12 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:03:28.532 01:38:12 -- common/autotest_common.sh@10 -- $ set +x
00:03:28.532 ************************************
00:03:28.532 START TEST make
00:03:28.532 ************************************
00:03:28.532 01:38:12 -- common/autotest_common.sh@1104 -- $ make -j48
00:03:28.532 make[1]: Nothing to be done for 'all'.
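The "Using .../dpdk/build/lib/pkgconfig for additional libs" step above is SPDK's configure resolving the freshly staged DPDK through the libdpdk.pc file installed earlier in this log. That lookup can be reproduced by hand; a minimal sketch, assuming the workspace paths from this run (configure's internal mechanics may differ, and the comments are illustrative):

    export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
    pkg-config --modversion libdpdk      # version string of the staged DPDK
    pkg-config --cflags --libs libdpdk   # roughly the include/link flags configure picks up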
00:03:28.796 The Meson build system
00:03:28.796 Version: 1.3.1
00:03:28.796 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:03:28.796 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:03:28.796 Build type: native build
00:03:28.796 Project name: libvfio-user
00:03:28.796 Project version: 0.0.1
00:03:28.796 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:03:28.796 C linker for the host machine: gcc ld.bfd 2.39-16
00:03:28.796 Host machine cpu family: x86_64
00:03:28.796 Host machine cpu: x86_64
00:03:28.796 Run-time dependency threads found: YES
00:03:28.796 Library dl found: YES
00:03:28.796 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:03:28.796 Run-time dependency json-c found: YES 0.17
00:03:28.796 Run-time dependency cmocka found: YES 1.1.7
00:03:28.796 Program pytest-3 found: NO
00:03:28.796 Program flake8 found: NO
00:03:28.796 Program misspell-fixer found: NO
00:03:28.796 Program restructuredtext-lint found: NO
00:03:28.796 Program valgrind found: YES (/usr/bin/valgrind)
00:03:28.796 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:28.796 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:28.796 Compiler for C supports arguments -Wwrite-strings: YES
00:03:28.796 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:03:28.796 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:03:28.796 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:03:28.796 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
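The probe results above decide which optional libvfio-user checks can run: the json-c and cmocka run-time dependencies were found through pkg-config, while pytest-3, flake8, misspell-fixer, and restructuredtext-lint were absent on this builder. A small sketch of re-running the same probes by hand, assuming both libraries ship pkg-config metadata there:

    pkg-config --modversion json-c cmocka   # the run-time dependencies meson reported as found
    command -v valgrind pytest-3 flake8     # of these, only valgrind was present in this run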
00:03:28.796 Build targets in project: 8
00:03:28.796 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:03:28.796 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:03:28.796
00:03:28.796 libvfio-user 0.0.1
00:03:28.796
00:03:28.796 User defined options
00:03:28.796 buildtype : debug
00:03:28.796 default_library: shared
00:03:28.796 libdir : /usr/local/lib
00:03:28.796
00:03:28.796 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:03:29.372 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:03:29.636 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:03:29.636 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:03:29.636 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:03:29.636 [4/37] Compiling C object samples/lspci.p/lspci.c.o
00:03:29.636 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:03:29.636 [6/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:03:29.636 [7/37] Compiling C object samples/null.p/null.c.o
00:03:29.899 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:03:29.899 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:03:29.899 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:03:29.899 [11/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:03:29.899 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:03:29.899 [13/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:03:29.899 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:03:29.899 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:03:29.899 [16/37] Compiling C object samples/server.p/server.c.o
00:03:29.899 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:03:29.899 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:03:29.899 [19/37] Compiling C object test/unit_tests.p/mocks.c.o
00:03:29.899 [20/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:03:29.899 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:03:29.899 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:03:29.899 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:03:29.899 [24/37] Compiling C object samples/client.p/client.c.o
00:03:29.899 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:03:29.899 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:03:29.899 [27/37] Linking target samples/client
00:03:29.900 [28/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:03:29.900 [29/37] Linking target lib/libvfio-user.so.0.0.1
00:03:30.160 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:03:30.160 [31/37] Linking target test/unit_tests
00:03:30.160 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:03:30.160 [33/37] Linking target samples/null
00:03:30.160 [34/37] Linking target samples/server
00:03:30.160 [35/37] Linking target samples/lspci
00:03:30.160 [36/37] Linking target samples/gpio-pci-idio-16
00:03:30.424 [37/37] Linking target samples/shadow_ioeventfd_server
00:03:30.424 INFO: autodetecting backend as ninja
00:03:30.424 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
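For reference, the configuration summarized above (buildtype debug, shared default_library, libdir /usr/local/lib) corresponds to a meson invocation along these lines, a sketch assuming the source and build directories meson reported; the exact wrapper SPDK's build scripts use may differ, and the DESTDIR-staged install recorded just below completes the sequence:

    meson setup /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user \
        --buildtype=debug --default-library=shared --libdir=/usr/local/lib
    ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug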
00:03:30.424 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:30.996 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:30.996 ninja: no work to do. 00:03:43.233 CC lib/ut_mock/mock.o 00:03:43.233 CC lib/ut/ut.o 00:03:43.233 CC lib/log/log.o 00:03:43.233 CC lib/log/log_flags.o 00:03:43.233 CC lib/log/log_deprecated.o 00:03:43.233 LIB libspdk_ut_mock.a 00:03:43.233 SO libspdk_ut_mock.so.5.0 00:03:43.233 LIB libspdk_ut.a 00:03:43.233 LIB libspdk_log.a 00:03:43.233 SO libspdk_ut.so.1.0 00:03:43.233 SO libspdk_log.so.6.1 00:03:43.233 SYMLINK libspdk_ut_mock.so 00:03:43.233 SYMLINK libspdk_ut.so 00:03:43.233 SYMLINK libspdk_log.so 00:03:43.233 CC lib/ioat/ioat.o 00:03:43.233 CC lib/util/base64.o 00:03:43.233 CXX lib/trace_parser/trace.o 00:03:43.233 CC lib/util/bit_array.o 00:03:43.233 CC lib/util/cpuset.o 00:03:43.233 CC lib/util/crc16.o 00:03:43.233 CC lib/util/crc32.o 00:03:43.233 CC lib/dma/dma.o 00:03:43.233 CC lib/util/crc32c.o 00:03:43.233 CC lib/util/crc32_ieee.o 00:03:43.233 CC lib/util/crc64.o 00:03:43.233 CC lib/util/dif.o 00:03:43.233 CC lib/util/fd.o 00:03:43.233 CC lib/util/file.o 00:03:43.233 CC lib/util/hexlify.o 00:03:43.233 CC lib/util/iov.o 00:03:43.233 CC lib/util/math.o 00:03:43.233 CC lib/util/pipe.o 00:03:43.233 CC lib/util/strerror_tls.o 00:03:43.233 CC lib/util/string.o 00:03:43.233 CC lib/util/uuid.o 00:03:43.233 CC lib/util/fd_group.o 00:03:43.233 CC lib/util/xor.o 00:03:43.233 CC lib/util/zipf.o 00:03:43.233 CC lib/vfio_user/host/vfio_user_pci.o 00:03:43.233 CC lib/vfio_user/host/vfio_user.o 00:03:43.233 LIB libspdk_dma.a 00:03:43.233 SO libspdk_dma.so.3.0 00:03:43.233 SYMLINK libspdk_dma.so 00:03:43.233 LIB libspdk_ioat.a 00:03:43.233 SO libspdk_ioat.so.6.0 00:03:43.233 LIB libspdk_vfio_user.a 00:03:43.233 SO libspdk_vfio_user.so.4.0 00:03:43.233 SYMLINK libspdk_ioat.so 00:03:43.233 SYMLINK libspdk_vfio_user.so 00:03:43.491 LIB libspdk_util.a 00:03:43.491 SO libspdk_util.so.8.0 00:03:43.491 SYMLINK libspdk_util.so 00:03:43.751 CC lib/json/json_parse.o 00:03:43.751 CC lib/vmd/vmd.o 00:03:43.751 CC lib/env_dpdk/env.o 00:03:43.751 CC lib/conf/conf.o 00:03:43.751 CC lib/rdma/common.o 00:03:43.751 CC lib/idxd/idxd.o 00:03:43.751 CC lib/json/json_util.o 00:03:43.751 CC lib/env_dpdk/memory.o 00:03:43.751 CC lib/rdma/rdma_verbs.o 00:03:43.751 CC lib/vmd/led.o 00:03:43.751 CC lib/idxd/idxd_user.o 00:03:43.751 CC lib/json/json_write.o 00:03:43.751 CC lib/env_dpdk/pci.o 00:03:43.751 CC lib/env_dpdk/init.o 00:03:43.751 CC lib/env_dpdk/threads.o 00:03:43.751 CC lib/env_dpdk/pci_ioat.o 00:03:43.751 CC lib/env_dpdk/pci_virtio.o 00:03:43.751 CC lib/env_dpdk/pci_vmd.o 00:03:43.751 CC lib/env_dpdk/pci_idxd.o 00:03:43.751 CC lib/env_dpdk/pci_event.o 00:03:43.751 CC lib/env_dpdk/sigbus_handler.o 00:03:43.751 CC lib/env_dpdk/pci_dpdk.o 00:03:43.751 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:43.751 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:43.751 LIB libspdk_trace_parser.a 00:03:43.751 SO libspdk_trace_parser.so.4.0 00:03:43.751 SYMLINK libspdk_trace_parser.so 00:03:44.009 LIB libspdk_conf.a 00:03:44.009 SO libspdk_conf.so.5.0 00:03:44.009 LIB libspdk_json.a 00:03:44.009 LIB libspdk_rdma.a 00:03:44.009 SYMLINK libspdk_conf.so 00:03:44.009 SO libspdk_rdma.so.5.0 00:03:44.009 SO libspdk_json.so.5.1 00:03:44.009 SYMLINK libspdk_rdma.so 00:03:44.268 SYMLINK libspdk_json.so 00:03:44.268 CC 
lib/jsonrpc/jsonrpc_server.o 00:03:44.268 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:44.268 CC lib/jsonrpc/jsonrpc_client.o 00:03:44.268 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:44.268 LIB libspdk_idxd.a 00:03:44.268 SO libspdk_idxd.so.11.0 00:03:44.268 SYMLINK libspdk_idxd.so 00:03:44.527 LIB libspdk_vmd.a 00:03:44.527 SO libspdk_vmd.so.5.0 00:03:44.527 SYMLINK libspdk_vmd.so 00:03:44.527 LIB libspdk_jsonrpc.a 00:03:44.527 SO libspdk_jsonrpc.so.5.1 00:03:44.527 SYMLINK libspdk_jsonrpc.so 00:03:44.787 CC lib/rpc/rpc.o 00:03:44.787 LIB libspdk_rpc.a 00:03:45.045 SO libspdk_rpc.so.5.0 00:03:45.045 SYMLINK libspdk_rpc.so 00:03:45.045 CC lib/trace/trace.o 00:03:45.045 CC lib/trace/trace_flags.o 00:03:45.045 CC lib/notify/notify.o 00:03:45.045 CC lib/trace/trace_rpc.o 00:03:45.045 CC lib/notify/notify_rpc.o 00:03:45.045 CC lib/sock/sock.o 00:03:45.045 CC lib/sock/sock_rpc.o 00:03:45.304 LIB libspdk_notify.a 00:03:45.304 SO libspdk_notify.so.5.0 00:03:45.304 LIB libspdk_trace.a 00:03:45.304 SYMLINK libspdk_notify.so 00:03:45.304 SO libspdk_trace.so.9.0 00:03:45.304 SYMLINK libspdk_trace.so 00:03:45.562 LIB libspdk_sock.a 00:03:45.562 SO libspdk_sock.so.8.0 00:03:45.562 CC lib/thread/thread.o 00:03:45.562 CC lib/thread/iobuf.o 00:03:45.562 SYMLINK libspdk_sock.so 00:03:45.562 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:45.562 CC lib/nvme/nvme_ctrlr.o 00:03:45.562 CC lib/nvme/nvme_fabric.o 00:03:45.562 CC lib/nvme/nvme_ns_cmd.o 00:03:45.562 CC lib/nvme/nvme_ns.o 00:03:45.562 CC lib/nvme/nvme_pcie_common.o 00:03:45.562 CC lib/nvme/nvme_pcie.o 00:03:45.562 CC lib/nvme/nvme_qpair.o 00:03:45.562 CC lib/nvme/nvme.o 00:03:45.562 CC lib/nvme/nvme_quirks.o 00:03:45.562 CC lib/nvme/nvme_transport.o 00:03:45.562 CC lib/nvme/nvme_discovery.o 00:03:45.562 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:45.562 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:45.562 CC lib/nvme/nvme_tcp.o 00:03:45.562 CC lib/nvme/nvme_opal.o 00:03:45.562 CC lib/nvme/nvme_io_msg.o 00:03:45.562 CC lib/nvme/nvme_poll_group.o 00:03:45.562 CC lib/nvme/nvme_zns.o 00:03:45.562 CC lib/nvme/nvme_cuse.o 00:03:45.562 CC lib/nvme/nvme_vfio_user.o 00:03:45.562 CC lib/nvme/nvme_rdma.o 00:03:45.821 LIB libspdk_env_dpdk.a 00:03:45.821 SO libspdk_env_dpdk.so.13.0 00:03:46.079 SYMLINK libspdk_env_dpdk.so 00:03:47.014 LIB libspdk_thread.a 00:03:47.014 SO libspdk_thread.so.9.0 00:03:47.273 SYMLINK libspdk_thread.so 00:03:47.273 CC lib/init/json_config.o 00:03:47.273 CC lib/virtio/virtio.o 00:03:47.273 CC lib/accel/accel.o 00:03:47.273 CC lib/init/subsystem.o 00:03:47.273 CC lib/accel/accel_rpc.o 00:03:47.273 CC lib/virtio/virtio_vhost_user.o 00:03:47.273 CC lib/vfu_tgt/tgt_endpoint.o 00:03:47.273 CC lib/init/subsystem_rpc.o 00:03:47.273 CC lib/blob/blobstore.o 00:03:47.273 CC lib/accel/accel_sw.o 00:03:47.273 CC lib/virtio/virtio_vfio_user.o 00:03:47.273 CC lib/vfu_tgt/tgt_rpc.o 00:03:47.273 CC lib/init/rpc.o 00:03:47.273 CC lib/virtio/virtio_pci.o 00:03:47.273 CC lib/blob/request.o 00:03:47.273 CC lib/blob/zeroes.o 00:03:47.273 CC lib/blob/blob_bs_dev.o 00:03:47.531 LIB libspdk_init.a 00:03:47.531 SO libspdk_init.so.4.0 00:03:47.789 LIB libspdk_virtio.a 00:03:47.789 LIB libspdk_vfu_tgt.a 00:03:47.789 SYMLINK libspdk_init.so 00:03:47.789 SO libspdk_vfu_tgt.so.2.0 00:03:47.789 SO libspdk_virtio.so.6.0 00:03:47.789 SYMLINK libspdk_vfu_tgt.so 00:03:47.789 SYMLINK libspdk_virtio.so 00:03:47.789 CC lib/event/app.o 00:03:47.789 CC lib/event/reactor.o 00:03:47.789 CC lib/event/log_rpc.o 00:03:47.789 CC lib/event/app_rpc.o 00:03:47.789 CC lib/event/scheduler_static.o 00:03:48.047 
LIB libspdk_nvme.a 00:03:48.047 SO libspdk_nvme.so.12.0 00:03:48.306 LIB libspdk_event.a 00:03:48.306 SO libspdk_event.so.12.0 00:03:48.306 SYMLINK libspdk_event.so 00:03:48.306 LIB libspdk_accel.a 00:03:48.306 SO libspdk_accel.so.14.0 00:03:48.306 SYMLINK libspdk_nvme.so 00:03:48.566 SYMLINK libspdk_accel.so 00:03:48.566 CC lib/bdev/bdev.o 00:03:48.566 CC lib/bdev/bdev_rpc.o 00:03:48.566 CC lib/bdev/bdev_zone.o 00:03:48.566 CC lib/bdev/part.o 00:03:48.566 CC lib/bdev/scsi_nvme.o 00:03:49.940 LIB libspdk_blob.a 00:03:49.940 SO libspdk_blob.so.10.1 00:03:50.198 SYMLINK libspdk_blob.so 00:03:50.198 CC lib/blobfs/blobfs.o 00:03:50.198 CC lib/blobfs/tree.o 00:03:50.198 CC lib/lvol/lvol.o 00:03:51.133 LIB libspdk_blobfs.a 00:03:51.133 SO libspdk_blobfs.so.9.0 00:03:51.133 LIB libspdk_bdev.a 00:03:51.133 LIB libspdk_lvol.a 00:03:51.133 SO libspdk_bdev.so.14.0 00:03:51.133 SYMLINK libspdk_blobfs.so 00:03:51.133 SO libspdk_lvol.so.9.1 00:03:51.133 SYMLINK libspdk_lvol.so 00:03:51.133 SYMLINK libspdk_bdev.so 00:03:51.402 CC lib/scsi/dev.o 00:03:51.402 CC lib/ublk/ublk.o 00:03:51.402 CC lib/nvmf/ctrlr.o 00:03:51.402 CC lib/scsi/lun.o 00:03:51.402 CC lib/nvmf/ctrlr_discovery.o 00:03:51.402 CC lib/nbd/nbd.o 00:03:51.402 CC lib/ublk/ublk_rpc.o 00:03:51.402 CC lib/ftl/ftl_core.o 00:03:51.402 CC lib/scsi/port.o 00:03:51.402 CC lib/nvmf/ctrlr_bdev.o 00:03:51.402 CC lib/nbd/nbd_rpc.o 00:03:51.402 CC lib/ftl/ftl_init.o 00:03:51.402 CC lib/nvmf/subsystem.o 00:03:51.402 CC lib/ftl/ftl_layout.o 00:03:51.402 CC lib/nvmf/nvmf.o 00:03:51.402 CC lib/scsi/scsi.o 00:03:51.402 CC lib/ftl/ftl_debug.o 00:03:51.402 CC lib/ftl/ftl_io.o 00:03:51.402 CC lib/scsi/scsi_bdev.o 00:03:51.402 CC lib/nvmf/nvmf_rpc.o 00:03:51.402 CC lib/nvmf/transport.o 00:03:51.402 CC lib/scsi/scsi_pr.o 00:03:51.402 CC lib/ftl/ftl_sb.o 00:03:51.402 CC lib/ftl/ftl_l2p.o 00:03:51.402 CC lib/scsi/scsi_rpc.o 00:03:51.402 CC lib/nvmf/tcp.o 00:03:51.402 CC lib/nvmf/vfio_user.o 00:03:51.402 CC lib/ftl/ftl_l2p_flat.o 00:03:51.402 CC lib/scsi/task.o 00:03:51.402 CC lib/ftl/ftl_nv_cache.o 00:03:51.402 CC lib/nvmf/rdma.o 00:03:51.402 CC lib/ftl/ftl_band.o 00:03:51.402 CC lib/ftl/ftl_band_ops.o 00:03:51.402 CC lib/ftl/ftl_writer.o 00:03:51.402 CC lib/ftl/ftl_rq.o 00:03:51.402 CC lib/ftl/ftl_l2p_cache.o 00:03:51.402 CC lib/ftl/ftl_reloc.o 00:03:51.402 CC lib/ftl/ftl_p2l.o 00:03:51.402 CC lib/ftl/mngt/ftl_mngt.o 00:03:51.402 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:51.402 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:51.402 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:51.402 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:51.402 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:51.402 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:51.402 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:51.402 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:51.402 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:51.661 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:51.661 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:51.661 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:51.661 CC lib/ftl/utils/ftl_conf.o 00:03:51.661 CC lib/ftl/utils/ftl_md.o 00:03:51.661 CC lib/ftl/utils/ftl_mempool.o 00:03:51.661 CC lib/ftl/utils/ftl_bitmap.o 00:03:51.661 CC lib/ftl/utils/ftl_property.o 00:03:51.661 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:51.661 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:51.661 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:51.661 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:51.661 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:51.661 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:51.661 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:51.661 CC lib/ftl/upgrade/ftl_sb_v5.o 
00:03:51.661 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:51.920 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:51.920 CC lib/ftl/base/ftl_base_dev.o 00:03:51.920 CC lib/ftl/base/ftl_base_bdev.o 00:03:51.920 CC lib/ftl/ftl_trace.o 00:03:52.178 LIB libspdk_nbd.a 00:03:52.178 SO libspdk_nbd.so.6.0 00:03:52.178 LIB libspdk_scsi.a 00:03:52.178 SYMLINK libspdk_nbd.so 00:03:52.178 SO libspdk_scsi.so.8.0 00:03:52.178 SYMLINK libspdk_scsi.so 00:03:52.178 LIB libspdk_ublk.a 00:03:52.437 SO libspdk_ublk.so.2.0 00:03:52.437 CC lib/iscsi/conn.o 00:03:52.437 CC lib/vhost/vhost.o 00:03:52.437 CC lib/iscsi/init_grp.o 00:03:52.437 CC lib/vhost/vhost_rpc.o 00:03:52.437 CC lib/iscsi/iscsi.o 00:03:52.437 CC lib/vhost/vhost_scsi.o 00:03:52.437 CC lib/iscsi/md5.o 00:03:52.437 CC lib/vhost/vhost_blk.o 00:03:52.437 CC lib/vhost/rte_vhost_user.o 00:03:52.437 CC lib/iscsi/param.o 00:03:52.437 CC lib/iscsi/portal_grp.o 00:03:52.437 CC lib/iscsi/tgt_node.o 00:03:52.437 CC lib/iscsi/iscsi_subsystem.o 00:03:52.437 CC lib/iscsi/task.o 00:03:52.437 CC lib/iscsi/iscsi_rpc.o 00:03:52.437 SYMLINK libspdk_ublk.so 00:03:52.695 LIB libspdk_ftl.a 00:03:52.953 SO libspdk_ftl.so.8.0 00:03:53.211 SYMLINK libspdk_ftl.so 00:03:53.776 LIB libspdk_vhost.a 00:03:53.776 SO libspdk_vhost.so.7.1 00:03:53.776 SYMLINK libspdk_vhost.so 00:03:53.776 LIB libspdk_nvmf.a 00:03:53.776 LIB libspdk_iscsi.a 00:03:53.776 SO libspdk_nvmf.so.17.0 00:03:54.038 SO libspdk_iscsi.so.7.0 00:03:54.038 SYMLINK libspdk_iscsi.so 00:03:54.038 SYMLINK libspdk_nvmf.so 00:03:54.331 CC module/env_dpdk/env_dpdk_rpc.o 00:03:54.331 CC module/vfu_device/vfu_virtio.o 00:03:54.331 CC module/vfu_device/vfu_virtio_blk.o 00:03:54.331 CC module/vfu_device/vfu_virtio_scsi.o 00:03:54.331 CC module/vfu_device/vfu_virtio_rpc.o 00:03:54.331 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:54.331 CC module/sock/posix/posix.o 00:03:54.331 CC module/scheduler/gscheduler/gscheduler.o 00:03:54.331 CC module/accel/error/accel_error.o 00:03:54.331 CC module/accel/ioat/accel_ioat.o 00:03:54.331 CC module/accel/error/accel_error_rpc.o 00:03:54.331 CC module/accel/dsa/accel_dsa.o 00:03:54.331 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:54.331 CC module/accel/dsa/accel_dsa_rpc.o 00:03:54.331 CC module/accel/ioat/accel_ioat_rpc.o 00:03:54.331 CC module/blob/bdev/blob_bdev.o 00:03:54.331 CC module/accel/iaa/accel_iaa.o 00:03:54.331 CC module/accel/iaa/accel_iaa_rpc.o 00:03:54.331 LIB libspdk_env_dpdk_rpc.a 00:03:54.331 SO libspdk_env_dpdk_rpc.so.5.0 00:03:54.589 SYMLINK libspdk_env_dpdk_rpc.so 00:03:54.589 LIB libspdk_scheduler_dpdk_governor.a 00:03:54.589 LIB libspdk_scheduler_gscheduler.a 00:03:54.589 SO libspdk_scheduler_gscheduler.so.3.0 00:03:54.589 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:54.589 LIB libspdk_accel_error.a 00:03:54.589 LIB libspdk_accel_ioat.a 00:03:54.589 LIB libspdk_scheduler_dynamic.a 00:03:54.589 LIB libspdk_accel_iaa.a 00:03:54.589 SO libspdk_accel_error.so.1.0 00:03:54.589 SO libspdk_accel_ioat.so.5.0 00:03:54.589 SO libspdk_scheduler_dynamic.so.3.0 00:03:54.589 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:54.589 SYMLINK libspdk_scheduler_gscheduler.so 00:03:54.589 SO libspdk_accel_iaa.so.2.0 00:03:54.589 LIB libspdk_accel_dsa.a 00:03:54.589 SYMLINK libspdk_accel_error.so 00:03:54.589 SO libspdk_accel_dsa.so.4.0 00:03:54.589 LIB libspdk_blob_bdev.a 00:03:54.589 SYMLINK libspdk_scheduler_dynamic.so 00:03:54.589 SYMLINK libspdk_accel_ioat.so 00:03:54.589 SYMLINK libspdk_accel_iaa.so 00:03:54.589 SO libspdk_blob_bdev.so.10.1 00:03:54.589 SYMLINK 
libspdk_accel_dsa.so 00:03:54.589 SYMLINK libspdk_blob_bdev.so 00:03:54.848 CC module/bdev/gpt/gpt.o 00:03:54.848 CC module/bdev/null/bdev_null.o 00:03:54.848 CC module/bdev/malloc/bdev_malloc.o 00:03:54.848 CC module/bdev/passthru/vbdev_passthru.o 00:03:54.848 CC module/bdev/null/bdev_null_rpc.o 00:03:54.848 CC module/bdev/delay/vbdev_delay.o 00:03:54.848 CC module/bdev/gpt/vbdev_gpt.o 00:03:54.848 CC module/bdev/aio/bdev_aio.o 00:03:54.848 CC module/bdev/raid/bdev_raid.o 00:03:54.848 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:54.848 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:54.848 CC module/blobfs/bdev/blobfs_bdev.o 00:03:54.848 CC module/bdev/nvme/bdev_nvme.o 00:03:54.848 CC module/bdev/aio/bdev_aio_rpc.o 00:03:54.848 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:54.848 CC module/bdev/raid/bdev_raid_sb.o 00:03:54.848 CC module/bdev/raid/bdev_raid_rpc.o 00:03:54.848 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:54.848 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:54.848 CC module/bdev/split/vbdev_split.o 00:03:54.848 CC module/bdev/split/vbdev_split_rpc.o 00:03:54.848 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:54.848 CC module/bdev/lvol/vbdev_lvol.o 00:03:54.848 CC module/bdev/error/vbdev_error_rpc.o 00:03:54.848 CC module/bdev/error/vbdev_error.o 00:03:54.848 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:54.848 CC module/bdev/raid/raid0.o 00:03:54.848 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:54.848 CC module/bdev/nvme/nvme_rpc.o 00:03:54.848 CC module/bdev/raid/raid1.o 00:03:54.848 CC module/bdev/iscsi/bdev_iscsi.o 00:03:54.848 CC module/bdev/nvme/bdev_mdns_client.o 00:03:54.848 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:54.848 CC module/bdev/raid/concat.o 00:03:54.848 CC module/bdev/nvme/vbdev_opal.o 00:03:54.848 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:54.848 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:54.848 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:54.848 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:54.848 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:54.848 CC module/bdev/ftl/bdev_ftl.o 00:03:54.848 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:55.106 LIB libspdk_vfu_device.a 00:03:55.106 SO libspdk_vfu_device.so.2.0 00:03:55.106 SYMLINK libspdk_vfu_device.so 00:03:55.106 LIB libspdk_sock_posix.a 00:03:55.363 SO libspdk_sock_posix.so.5.0 00:03:55.363 LIB libspdk_blobfs_bdev.a 00:03:55.363 LIB libspdk_bdev_passthru.a 00:03:55.363 LIB libspdk_bdev_aio.a 00:03:55.363 SO libspdk_blobfs_bdev.so.5.0 00:03:55.363 SO libspdk_bdev_passthru.so.5.0 00:03:55.363 SO libspdk_bdev_aio.so.5.0 00:03:55.363 SYMLINK libspdk_sock_posix.so 00:03:55.363 LIB libspdk_bdev_error.a 00:03:55.363 LIB libspdk_bdev_split.a 00:03:55.363 SYMLINK libspdk_blobfs_bdev.so 00:03:55.363 LIB libspdk_bdev_null.a 00:03:55.363 SO libspdk_bdev_error.so.5.0 00:03:55.363 SYMLINK libspdk_bdev_passthru.so 00:03:55.363 SYMLINK libspdk_bdev_aio.so 00:03:55.363 SO libspdk_bdev_split.so.5.0 00:03:55.363 LIB libspdk_bdev_gpt.a 00:03:55.363 SO libspdk_bdev_null.so.5.0 00:03:55.363 LIB libspdk_bdev_zone_block.a 00:03:55.363 SO libspdk_bdev_gpt.so.5.0 00:03:55.363 SYMLINK libspdk_bdev_error.so 00:03:55.363 LIB libspdk_bdev_ftl.a 00:03:55.363 SO libspdk_bdev_zone_block.so.5.0 00:03:55.363 SYMLINK libspdk_bdev_split.so 00:03:55.363 SYMLINK libspdk_bdev_null.so 00:03:55.363 SO libspdk_bdev_ftl.so.5.0 00:03:55.363 LIB libspdk_bdev_delay.a 00:03:55.363 SYMLINK libspdk_bdev_gpt.so 00:03:55.363 SO libspdk_bdev_delay.so.5.0 00:03:55.363 LIB libspdk_bdev_malloc.a 00:03:55.363 SYMLINK libspdk_bdev_zone_block.so 
00:03:55.620 LIB libspdk_bdev_iscsi.a 00:03:55.620 SYMLINK libspdk_bdev_ftl.so 00:03:55.620 SO libspdk_bdev_malloc.so.5.0 00:03:55.620 SO libspdk_bdev_iscsi.so.5.0 00:03:55.620 SYMLINK libspdk_bdev_delay.so 00:03:55.620 SYMLINK libspdk_bdev_malloc.so 00:03:55.620 SYMLINK libspdk_bdev_iscsi.so 00:03:55.620 LIB libspdk_bdev_lvol.a 00:03:55.620 SO libspdk_bdev_lvol.so.5.0 00:03:55.620 LIB libspdk_bdev_virtio.a 00:03:55.620 SO libspdk_bdev_virtio.so.5.0 00:03:55.620 SYMLINK libspdk_bdev_lvol.so 00:03:55.620 SYMLINK libspdk_bdev_virtio.so 00:03:55.879 LIB libspdk_bdev_raid.a 00:03:56.137 SO libspdk_bdev_raid.so.5.0 00:03:56.137 SYMLINK libspdk_bdev_raid.so 00:03:57.072 LIB libspdk_bdev_nvme.a 00:03:57.330 SO libspdk_bdev_nvme.so.6.0 00:03:57.330 SYMLINK libspdk_bdev_nvme.so 00:03:57.588 CC module/event/subsystems/iobuf/iobuf.o 00:03:57.588 CC module/event/subsystems/vmd/vmd.o 00:03:57.588 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:57.588 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:57.588 CC module/event/subsystems/sock/sock.o 00:03:57.588 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:57.588 CC module/event/subsystems/scheduler/scheduler.o 00:03:57.588 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:57.588 LIB libspdk_event_sock.a 00:03:57.588 LIB libspdk_event_vhost_blk.a 00:03:57.588 LIB libspdk_event_scheduler.a 00:03:57.588 LIB libspdk_event_vmd.a 00:03:57.588 LIB libspdk_event_vfu_tgt.a 00:03:57.588 LIB libspdk_event_iobuf.a 00:03:57.588 SO libspdk_event_sock.so.4.0 00:03:57.846 SO libspdk_event_scheduler.so.3.0 00:03:57.846 SO libspdk_event_vhost_blk.so.2.0 00:03:57.846 SO libspdk_event_vfu_tgt.so.2.0 00:03:57.846 SO libspdk_event_vmd.so.5.0 00:03:57.846 SO libspdk_event_iobuf.so.2.0 00:03:57.846 SYMLINK libspdk_event_sock.so 00:03:57.846 SYMLINK libspdk_event_scheduler.so 00:03:57.846 SYMLINK libspdk_event_vhost_blk.so 00:03:57.846 SYMLINK libspdk_event_vfu_tgt.so 00:03:57.846 SYMLINK libspdk_event_vmd.so 00:03:57.846 SYMLINK libspdk_event_iobuf.so 00:03:57.846 CC module/event/subsystems/accel/accel.o 00:03:58.104 LIB libspdk_event_accel.a 00:03:58.105 SO libspdk_event_accel.so.5.0 00:03:58.105 SYMLINK libspdk_event_accel.so 00:03:58.363 CC module/event/subsystems/bdev/bdev.o 00:03:58.363 LIB libspdk_event_bdev.a 00:03:58.363 SO libspdk_event_bdev.so.5.0 00:03:58.363 SYMLINK libspdk_event_bdev.so 00:03:58.620 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:58.620 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:58.620 CC module/event/subsystems/scsi/scsi.o 00:03:58.620 CC module/event/subsystems/nbd/nbd.o 00:03:58.620 CC module/event/subsystems/ublk/ublk.o 00:03:58.620 LIB libspdk_event_nbd.a 00:03:58.620 LIB libspdk_event_ublk.a 00:03:58.620 LIB libspdk_event_scsi.a 00:03:58.878 SO libspdk_event_nbd.so.5.0 00:03:58.878 SO libspdk_event_ublk.so.2.0 00:03:58.878 SO libspdk_event_scsi.so.5.0 00:03:58.878 SYMLINK libspdk_event_nbd.so 00:03:58.878 SYMLINK libspdk_event_ublk.so 00:03:58.878 LIB libspdk_event_nvmf.a 00:03:58.878 SYMLINK libspdk_event_scsi.so 00:03:58.878 SO libspdk_event_nvmf.so.5.0 00:03:58.878 SYMLINK libspdk_event_nvmf.so 00:03:58.878 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:58.878 CC module/event/subsystems/iscsi/iscsi.o 00:03:59.136 LIB libspdk_event_vhost_scsi.a 00:03:59.136 LIB libspdk_event_iscsi.a 00:03:59.136 SO libspdk_event_vhost_scsi.so.2.0 00:03:59.136 SO libspdk_event_iscsi.so.5.0 00:03:59.136 SYMLINK libspdk_event_vhost_scsi.so 00:03:59.136 SYMLINK libspdk_event_iscsi.so 00:03:59.136 SO libspdk.so.5.0 00:03:59.136 SYMLINK 
libspdk.so 00:03:59.397 CC app/trace_record/trace_record.o 00:03:59.397 CXX app/trace/trace.o 00:03:59.397 CC app/spdk_lspci/spdk_lspci.o 00:03:59.397 TEST_HEADER include/spdk/accel.h 00:03:59.397 CC app/spdk_nvme_identify/identify.o 00:03:59.397 CC app/spdk_nvme_discover/discovery_aer.o 00:03:59.397 TEST_HEADER include/spdk/accel_module.h 00:03:59.397 CC app/spdk_top/spdk_top.o 00:03:59.397 TEST_HEADER include/spdk/assert.h 00:03:59.397 CC test/rpc_client/rpc_client_test.o 00:03:59.397 TEST_HEADER include/spdk/barrier.h 00:03:59.397 CC app/spdk_nvme_perf/perf.o 00:03:59.397 TEST_HEADER include/spdk/base64.h 00:03:59.397 TEST_HEADER include/spdk/bdev.h 00:03:59.397 TEST_HEADER include/spdk/bdev_module.h 00:03:59.397 TEST_HEADER include/spdk/bdev_zone.h 00:03:59.397 TEST_HEADER include/spdk/bit_array.h 00:03:59.397 TEST_HEADER include/spdk/bit_pool.h 00:03:59.397 TEST_HEADER include/spdk/blob_bdev.h 00:03:59.397 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:59.397 TEST_HEADER include/spdk/blobfs.h 00:03:59.397 TEST_HEADER include/spdk/blob.h 00:03:59.397 TEST_HEADER include/spdk/conf.h 00:03:59.397 TEST_HEADER include/spdk/config.h 00:03:59.397 TEST_HEADER include/spdk/cpuset.h 00:03:59.397 TEST_HEADER include/spdk/crc16.h 00:03:59.397 TEST_HEADER include/spdk/crc32.h 00:03:59.397 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:59.397 TEST_HEADER include/spdk/crc64.h 00:03:59.397 TEST_HEADER include/spdk/dif.h 00:03:59.397 CC app/spdk_dd/spdk_dd.o 00:03:59.397 TEST_HEADER include/spdk/dma.h 00:03:59.397 TEST_HEADER include/spdk/endian.h 00:03:59.397 CC app/nvmf_tgt/nvmf_main.o 00:03:59.397 TEST_HEADER include/spdk/env_dpdk.h 00:03:59.397 CC app/iscsi_tgt/iscsi_tgt.o 00:03:59.397 TEST_HEADER include/spdk/env.h 00:03:59.397 CC app/vhost/vhost.o 00:03:59.397 TEST_HEADER include/spdk/event.h 00:03:59.397 TEST_HEADER include/spdk/fd_group.h 00:03:59.397 CC examples/nvme/reconnect/reconnect.o 00:03:59.397 TEST_HEADER include/spdk/fd.h 00:03:59.397 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:59.397 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:59.397 CC examples/nvme/hello_world/hello_world.o 00:03:59.397 CC examples/nvme/hotplug/hotplug.o 00:03:59.397 CC examples/util/zipf/zipf.o 00:03:59.397 TEST_HEADER include/spdk/file.h 00:03:59.397 CC test/nvme/aer/aer.o 00:03:59.397 CC examples/accel/perf/accel_perf.o 00:03:59.397 CC examples/nvme/arbitration/arbitration.o 00:03:59.397 TEST_HEADER include/spdk/ftl.h 00:03:59.397 CC examples/ioat/perf/perf.o 00:03:59.397 CC app/fio/nvme/fio_plugin.o 00:03:59.397 CC examples/sock/hello_world/hello_sock.o 00:03:59.397 TEST_HEADER include/spdk/gpt_spec.h 00:03:59.397 TEST_HEADER include/spdk/hexlify.h 00:03:59.397 CC examples/vmd/lsvmd/lsvmd.o 00:03:59.397 CC examples/idxd/perf/perf.o 00:03:59.397 CC examples/nvme/abort/abort.o 00:03:59.397 TEST_HEADER include/spdk/histogram_data.h 00:03:59.397 CC test/thread/poller_perf/poller_perf.o 00:03:59.397 TEST_HEADER include/spdk/idxd.h 00:03:59.397 CC test/nvme/reset/reset.o 00:03:59.397 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:59.397 TEST_HEADER include/spdk/idxd_spec.h 00:03:59.659 TEST_HEADER include/spdk/init.h 00:03:59.659 TEST_HEADER include/spdk/ioat.h 00:03:59.659 CC test/event/event_perf/event_perf.o 00:03:59.659 TEST_HEADER include/spdk/ioat_spec.h 00:03:59.659 TEST_HEADER include/spdk/iscsi_spec.h 00:03:59.659 TEST_HEADER include/spdk/json.h 00:03:59.659 CC app/spdk_tgt/spdk_tgt.o 00:03:59.659 TEST_HEADER include/spdk/jsonrpc.h 00:03:59.659 TEST_HEADER include/spdk/likely.h 00:03:59.659 
TEST_HEADER include/spdk/log.h 00:03:59.659 TEST_HEADER include/spdk/lvol.h 00:03:59.659 TEST_HEADER include/spdk/memory.h 00:03:59.659 TEST_HEADER include/spdk/mmio.h 00:03:59.659 TEST_HEADER include/spdk/nbd.h 00:03:59.659 TEST_HEADER include/spdk/notify.h 00:03:59.659 CC test/bdev/bdevio/bdevio.o 00:03:59.659 TEST_HEADER include/spdk/nvme.h 00:03:59.659 CC examples/blob/hello_world/hello_blob.o 00:03:59.659 TEST_HEADER include/spdk/nvme_intel.h 00:03:59.659 CC app/fio/bdev/fio_plugin.o 00:03:59.659 CC examples/bdev/hello_world/hello_bdev.o 00:03:59.659 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:59.659 CC examples/bdev/bdevperf/bdevperf.o 00:03:59.659 CC test/dma/test_dma/test_dma.o 00:03:59.659 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:59.659 TEST_HEADER include/spdk/nvme_spec.h 00:03:59.659 TEST_HEADER include/spdk/nvme_zns.h 00:03:59.659 CC test/accel/dif/dif.o 00:03:59.659 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:59.659 CC examples/thread/thread/thread_ex.o 00:03:59.659 CC examples/nvmf/nvmf/nvmf.o 00:03:59.659 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:59.659 CC examples/blob/cli/blobcli.o 00:03:59.659 CC test/blobfs/mkfs/mkfs.o 00:03:59.659 CC test/app/bdev_svc/bdev_svc.o 00:03:59.659 TEST_HEADER include/spdk/nvmf.h 00:03:59.659 CC test/lvol/esnap/esnap.o 00:03:59.659 TEST_HEADER include/spdk/nvmf_spec.h 00:03:59.659 TEST_HEADER include/spdk/nvmf_transport.h 00:03:59.659 TEST_HEADER include/spdk/opal.h 00:03:59.659 TEST_HEADER include/spdk/opal_spec.h 00:03:59.659 TEST_HEADER include/spdk/pci_ids.h 00:03:59.659 TEST_HEADER include/spdk/pipe.h 00:03:59.659 TEST_HEADER include/spdk/queue.h 00:03:59.659 TEST_HEADER include/spdk/reduce.h 00:03:59.659 CC test/env/mem_callbacks/mem_callbacks.o 00:03:59.659 TEST_HEADER include/spdk/rpc.h 00:03:59.659 TEST_HEADER include/spdk/scheduler.h 00:03:59.659 TEST_HEADER include/spdk/scsi.h 00:03:59.659 TEST_HEADER include/spdk/scsi_spec.h 00:03:59.659 TEST_HEADER include/spdk/sock.h 00:03:59.659 TEST_HEADER include/spdk/stdinc.h 00:03:59.659 TEST_HEADER include/spdk/string.h 00:03:59.659 TEST_HEADER include/spdk/thread.h 00:03:59.659 TEST_HEADER include/spdk/trace.h 00:03:59.659 TEST_HEADER include/spdk/trace_parser.h 00:03:59.659 TEST_HEADER include/spdk/tree.h 00:03:59.659 TEST_HEADER include/spdk/ublk.h 00:03:59.659 TEST_HEADER include/spdk/util.h 00:03:59.659 LINK spdk_lspci 00:03:59.659 TEST_HEADER include/spdk/uuid.h 00:03:59.659 TEST_HEADER include/spdk/version.h 00:03:59.659 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:59.659 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:59.659 TEST_HEADER include/spdk/vhost.h 00:03:59.659 TEST_HEADER include/spdk/vmd.h 00:03:59.659 TEST_HEADER include/spdk/xor.h 00:03:59.659 TEST_HEADER include/spdk/zipf.h 00:03:59.659 CXX test/cpp_headers/accel.o 00:03:59.923 LINK lsvmd 00:03:59.923 LINK rpc_client_test 00:03:59.923 LINK zipf 00:03:59.923 LINK poller_perf 00:03:59.923 LINK spdk_nvme_discover 00:03:59.923 LINK interrupt_tgt 00:03:59.923 LINK event_perf 00:03:59.923 LINK vhost 00:03:59.923 LINK nvmf_tgt 00:03:59.923 LINK cmb_copy 00:03:59.923 LINK pmr_persistence 00:03:59.923 LINK spdk_trace_record 00:03:59.923 LINK iscsi_tgt 00:03:59.923 LINK ioat_perf 00:03:59.923 LINK hello_world 00:03:59.923 LINK spdk_tgt 00:03:59.923 LINK hotplug 00:03:59.923 LINK bdev_svc 00:03:59.923 LINK hello_sock 00:03:59.923 LINK mkfs 00:03:59.923 LINK reset 00:04:00.186 LINK hello_blob 00:04:00.186 LINK aer 00:04:00.186 LINK hello_bdev 00:04:00.186 CXX test/cpp_headers/accel_module.o 00:04:00.186 CC 
test/nvme/sgl/sgl.o 00:04:00.186 LINK thread 00:04:00.186 LINK arbitration 00:04:00.186 LINK reconnect 00:04:00.186 LINK idxd_perf 00:04:00.186 LINK spdk_dd 00:04:00.186 CXX test/cpp_headers/assert.o 00:04:00.186 LINK nvmf 00:04:00.186 CC examples/ioat/verify/verify.o 00:04:00.186 LINK abort 00:04:00.186 LINK spdk_trace 00:04:00.186 CC examples/vmd/led/led.o 00:04:00.186 CC test/env/vtophys/vtophys.o 00:04:00.186 CC test/event/reactor/reactor.o 00:04:00.186 CC test/nvme/e2edp/nvme_dp.o 00:04:00.186 LINK test_dma 00:04:00.186 CC test/event/reactor_perf/reactor_perf.o 00:04:00.186 CC test/nvme/overhead/overhead.o 00:04:00.452 LINK bdevio 00:04:00.452 LINK dif 00:04:00.452 CC test/nvme/err_injection/err_injection.o 00:04:00.452 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:00.452 CXX test/cpp_headers/barrier.o 00:04:00.452 CC test/nvme/startup/startup.o 00:04:00.452 LINK accel_perf 00:04:00.452 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:00.452 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:00.452 CC test/app/histogram_perf/histogram_perf.o 00:04:00.452 CXX test/cpp_headers/base64.o 00:04:00.452 CC test/nvme/reserve/reserve.o 00:04:00.452 CC test/event/app_repeat/app_repeat.o 00:04:00.452 LINK nvme_manage 00:04:00.452 CC test/nvme/simple_copy/simple_copy.o 00:04:00.452 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:00.452 CC test/nvme/connect_stress/connect_stress.o 00:04:00.452 CXX test/cpp_headers/bdev.o 00:04:00.452 LINK led 00:04:00.452 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:00.452 CXX test/cpp_headers/bdev_module.o 00:04:00.452 LINK spdk_nvme 00:04:00.452 LINK vtophys 00:04:00.452 LINK reactor 00:04:00.452 LINK spdk_bdev 00:04:00.452 CC test/event/scheduler/scheduler.o 00:04:00.452 CC test/env/memory/memory_ut.o 00:04:00.714 CC test/nvme/boot_partition/boot_partition.o 00:04:00.714 CC test/app/jsoncat/jsoncat.o 00:04:00.714 LINK reactor_perf 00:04:00.714 LINK blobcli 00:04:00.714 LINK sgl 00:04:00.714 CXX test/cpp_headers/bdev_zone.o 00:04:00.714 CC test/app/stub/stub.o 00:04:00.714 CC test/env/pci/pci_ut.o 00:04:00.714 CXX test/cpp_headers/bit_array.o 00:04:00.714 LINK verify 00:04:00.714 CC test/nvme/compliance/nvme_compliance.o 00:04:00.714 LINK histogram_perf 00:04:00.714 CC test/nvme/fused_ordering/fused_ordering.o 00:04:00.714 LINK startup 00:04:00.714 LINK err_injection 00:04:00.714 CXX test/cpp_headers/bit_pool.o 00:04:00.714 CXX test/cpp_headers/blob_bdev.o 00:04:00.714 CXX test/cpp_headers/blobfs_bdev.o 00:04:00.714 CXX test/cpp_headers/blobfs.o 00:04:00.714 LINK app_repeat 00:04:00.714 CXX test/cpp_headers/blob.o 00:04:00.714 CXX test/cpp_headers/conf.o 00:04:00.714 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:00.714 CC test/nvme/fdp/fdp.o 00:04:00.714 CXX test/cpp_headers/config.o 00:04:00.714 LINK nvme_dp 00:04:00.714 CC test/nvme/cuse/cuse.o 00:04:00.714 CXX test/cpp_headers/cpuset.o 00:04:00.985 LINK reserve 00:04:00.985 CXX test/cpp_headers/crc16.o 00:04:00.985 CXX test/cpp_headers/crc32.o 00:04:00.985 LINK env_dpdk_post_init 00:04:00.985 CXX test/cpp_headers/crc64.o 00:04:00.985 LINK overhead 00:04:00.985 CXX test/cpp_headers/dif.o 00:04:00.985 LINK jsoncat 00:04:00.985 LINK connect_stress 00:04:00.985 LINK mem_callbacks 00:04:00.985 CXX test/cpp_headers/dma.o 00:04:00.985 LINK boot_partition 00:04:00.985 LINK simple_copy 00:04:00.985 CXX test/cpp_headers/endian.o 00:04:00.985 CXX test/cpp_headers/env_dpdk.o 00:04:00.985 CXX test/cpp_headers/env.o 00:04:00.985 LINK stub 00:04:00.985 LINK spdk_nvme_perf 00:04:00.985 CXX 
test/cpp_headers/event.o 00:04:00.985 CXX test/cpp_headers/fd_group.o 00:04:00.985 CXX test/cpp_headers/fd.o 00:04:00.985 LINK scheduler 00:04:00.985 CXX test/cpp_headers/file.o 00:04:00.985 CXX test/cpp_headers/ftl.o 00:04:00.985 LINK bdevperf 00:04:00.985 CXX test/cpp_headers/gpt_spec.o 00:04:00.985 LINK spdk_nvme_identify 00:04:00.985 LINK spdk_top 00:04:00.985 CXX test/cpp_headers/hexlify.o 00:04:00.985 CXX test/cpp_headers/histogram_data.o 00:04:01.248 CXX test/cpp_headers/idxd.o 00:04:01.248 LINK nvme_fuzz 00:04:01.248 CXX test/cpp_headers/idxd_spec.o 00:04:01.248 LINK fused_ordering 00:04:01.248 CXX test/cpp_headers/init.o 00:04:01.248 CXX test/cpp_headers/ioat.o 00:04:01.248 CXX test/cpp_headers/ioat_spec.o 00:04:01.248 CXX test/cpp_headers/iscsi_spec.o 00:04:01.248 CXX test/cpp_headers/json.o 00:04:01.248 LINK doorbell_aers 00:04:01.248 CXX test/cpp_headers/jsonrpc.o 00:04:01.248 CXX test/cpp_headers/likely.o 00:04:01.248 CXX test/cpp_headers/log.o 00:04:01.248 CXX test/cpp_headers/lvol.o 00:04:01.248 CXX test/cpp_headers/memory.o 00:04:01.248 LINK vhost_fuzz 00:04:01.248 CXX test/cpp_headers/mmio.o 00:04:01.248 CXX test/cpp_headers/nbd.o 00:04:01.248 CXX test/cpp_headers/notify.o 00:04:01.248 CXX test/cpp_headers/nvme.o 00:04:01.248 CXX test/cpp_headers/nvme_intel.o 00:04:01.248 CXX test/cpp_headers/nvme_ocssd.o 00:04:01.248 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:01.248 CXX test/cpp_headers/nvme_spec.o 00:04:01.248 CXX test/cpp_headers/nvme_zns.o 00:04:01.248 LINK nvme_compliance 00:04:01.248 CXX test/cpp_headers/nvmf_cmd.o 00:04:01.248 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:01.248 CXX test/cpp_headers/nvmf.o 00:04:01.248 CXX test/cpp_headers/nvmf_spec.o 00:04:01.248 CXX test/cpp_headers/nvmf_transport.o 00:04:01.248 CXX test/cpp_headers/opal.o 00:04:01.511 CXX test/cpp_headers/opal_spec.o 00:04:01.511 CXX test/cpp_headers/pci_ids.o 00:04:01.511 CXX test/cpp_headers/pipe.o 00:04:01.511 LINK pci_ut 00:04:01.511 CXX test/cpp_headers/queue.o 00:04:01.511 CXX test/cpp_headers/reduce.o 00:04:01.511 LINK fdp 00:04:01.511 CXX test/cpp_headers/rpc.o 00:04:01.511 CXX test/cpp_headers/scheduler.o 00:04:01.511 CXX test/cpp_headers/scsi.o 00:04:01.511 CXX test/cpp_headers/scsi_spec.o 00:04:01.511 CXX test/cpp_headers/sock.o 00:04:01.511 CXX test/cpp_headers/stdinc.o 00:04:01.511 CXX test/cpp_headers/string.o 00:04:01.511 CXX test/cpp_headers/thread.o 00:04:01.511 CXX test/cpp_headers/trace.o 00:04:01.511 CXX test/cpp_headers/trace_parser.o 00:04:01.511 CXX test/cpp_headers/tree.o 00:04:01.511 CXX test/cpp_headers/ublk.o 00:04:01.511 CXX test/cpp_headers/util.o 00:04:01.511 CXX test/cpp_headers/uuid.o 00:04:01.511 CXX test/cpp_headers/version.o 00:04:01.511 CXX test/cpp_headers/vfio_user_pci.o 00:04:01.511 CXX test/cpp_headers/vfio_user_spec.o 00:04:01.511 CXX test/cpp_headers/vhost.o 00:04:01.511 CXX test/cpp_headers/vmd.o 00:04:01.511 CXX test/cpp_headers/xor.o 00:04:01.511 CXX test/cpp_headers/zipf.o 00:04:02.078 LINK memory_ut 00:04:02.337 LINK cuse 00:04:02.597 LINK iscsi_fuzz 00:04:05.131 LINK esnap 00:04:05.390 00:04:05.390 real 0m38.314s 00:04:05.390 user 7m15.687s 00:04:05.390 sys 1m39.594s 00:04:05.390 01:38:50 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:05.390 01:38:50 -- common/autotest_common.sh@10 -- $ set +x 00:04:05.390 ************************************ 00:04:05.390 END TEST make 00:04:05.390 ************************************ 00:04:05.390 01:38:50 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:05.390 01:38:50 -- nvmf/common.sh@7 -- # uname -s 00:04:05.390 01:38:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:05.390 01:38:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:05.390 01:38:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:05.390 01:38:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:05.390 01:38:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:05.390 01:38:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:05.390 01:38:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:05.390 01:38:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:05.390 01:38:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:05.390 01:38:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:05.390 01:38:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:05.390 01:38:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:05.390 01:38:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:05.390 01:38:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:05.390 01:38:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:05.390 01:38:50 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:05.390 01:38:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:05.390 01:38:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:05.390 01:38:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:05.390 01:38:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.390 01:38:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.390 01:38:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.390 01:38:50 -- paths/export.sh@5 -- # export PATH 00:04:05.390 01:38:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:05.390 01:38:50 -- nvmf/common.sh@46 -- # : 0 00:04:05.390 01:38:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:05.390 01:38:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:05.390 01:38:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:05.390 01:38:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:05.390 01:38:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:05.390 01:38:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:05.390 01:38:50 -- nvmf/common.sh@34 
-- # '[' 0 -eq 1 ']' 00:04:05.390 01:38:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:05.390 01:38:50 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:05.390 01:38:50 -- spdk/autotest.sh@32 -- # uname -s 00:04:05.390 01:38:50 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:05.390 01:38:50 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:05.391 01:38:50 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:05.391 01:38:50 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:05.391 01:38:50 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:05.391 01:38:50 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:05.391 01:38:50 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:05.391 01:38:50 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:05.391 01:38:50 -- spdk/autotest.sh@48 -- # udevadm_pid=2005940 00:04:05.391 01:38:50 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:05.391 01:38:50 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:04:05.391 01:38:50 -- spdk/autotest.sh@54 -- # echo 2005942 00:04:05.391 01:38:50 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:04:05.391 01:38:50 -- spdk/autotest.sh@56 -- # echo 2005943 00:04:05.391 01:38:50 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:04:05.391 01:38:50 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:04:05.391 01:38:50 -- spdk/autotest.sh@60 -- # echo 2005944 00:04:05.391 01:38:50 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:04:05.391 01:38:50 -- spdk/autotest.sh@62 -- # echo 2005945 00:04:05.391 01:38:50 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l 00:04:05.391 01:38:50 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:05.391 01:38:50 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:05.391 01:38:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:05.391 01:38:50 -- common/autotest_common.sh@10 -- # set +x 00:04:05.391 01:38:50 -- spdk/autotest.sh@70 -- # create_test_list 00:04:05.391 01:38:50 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:05.391 01:38:50 -- common/autotest_common.sh@10 -- # set +x 00:04:05.391 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:04:05.391 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:04:05.391 01:38:51 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:05.391 01:38:51 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:05.391 01:38:51 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:05.391 01:38:51 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:05.391 01:38:51 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:05.391 01:38:51 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:05.391 01:38:51 -- common/autotest_common.sh@1440 -- # uname 00:04:05.391 01:38:51 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:05.391 01:38:51 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:05.391 01:38:51 -- common/autotest_common.sh@1460 -- # uname 00:04:05.391 01:38:51 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:05.391 01:38:51 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:05.391 01:38:51 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:05.391 01:38:51 -- spdk/autotest.sh@83 -- # hash lcov 00:04:05.391 01:38:51 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:05.391 01:38:51 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:05.391 --rc lcov_branch_coverage=1 00:04:05.391 --rc lcov_function_coverage=1 00:04:05.391 --rc genhtml_branch_coverage=1 00:04:05.391 --rc genhtml_function_coverage=1 00:04:05.391 --rc genhtml_legend=1 00:04:05.391 --rc geninfo_all_blocks=1 00:04:05.391 ' 00:04:05.391 01:38:51 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:05.391 --rc lcov_branch_coverage=1 00:04:05.391 --rc lcov_function_coverage=1 00:04:05.391 --rc genhtml_branch_coverage=1 00:04:05.391 --rc genhtml_function_coverage=1 00:04:05.391 --rc genhtml_legend=1 00:04:05.391 --rc geninfo_all_blocks=1 00:04:05.391 ' 00:04:05.391 01:38:51 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:05.391 --rc lcov_branch_coverage=1 00:04:05.391 --rc lcov_function_coverage=1 00:04:05.391 --rc genhtml_branch_coverage=1 00:04:05.391 --rc genhtml_function_coverage=1 00:04:05.391 --rc genhtml_legend=1 00:04:05.391 
--rc geninfo_all_blocks=1 00:04:05.391 --no-external' 00:04:05.391 01:38:51 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:05.391 --rc lcov_branch_coverage=1 00:04:05.391 --rc lcov_function_coverage=1 00:04:05.391 --rc genhtml_branch_coverage=1 00:04:05.391 --rc genhtml_function_coverage=1 00:04:05.391 --rc genhtml_legend=1 00:04:05.391 --rc geninfo_all_blocks=1 00:04:05.391 --no-external' 00:04:05.391 01:38:51 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:05.650 lcov: LCOV version 1.14 00:04:05.650 01:38:51 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:20.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:20.560 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:20.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:20.560 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:20.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:20.560 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:35.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:04:35.437 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:04:35.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:35.437 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:35.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:04:35.437 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:04:35.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:04:35.437 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:04:35.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:04:35.437 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:04:35.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:04:35.437 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:04:35.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:04:35.437 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:04:35.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:04:35.437 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:04:35.437 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:04:35.437 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no 
functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:04:35.438 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 
00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:04:35.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:04:35.438 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:04:35.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:04:35.439 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:04:35.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:04:35.439 geninfo: WARNING: GCOV did not produce 
any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:04:35.439 (geninfo elided: the same "no functions found" / "GCOV did not produce any data" warning pair repeats at 00:04:35.439 for the remaining cpp_headers gcno files: pipe, queue, reduce, rpc, scheduler, scsi, scsi_spec, sock, stdinc, string, thread, trace, trace_parser, tree, ublk, util, uuid, version, vfio_user_pci, vfio_user_spec, vhost, vmd, xor, zipf — presumably harmless, since these header-compile objects contain no function bodies) 00:04:36.373 01:39:21 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:04:36.373 01:39:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:36.373 01:39:21 -- common/autotest_common.sh@10 -- # set +x 00:04:36.373 01:39:21 -- spdk/autotest.sh@102 -- # rm -f 00:04:36.373 01:39:21 -- spdk/autotest.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:37.749 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:37.749 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:37.749 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:37.749 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:37.749 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:37.749 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:37.749 0000:00:04.2 (8086 0e22): Already using the ioatdma
driver 00:04:37.749 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:37.749 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:37.749 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:37.749 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:37.749 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:37.749 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:37.749 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:37.749 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:37.749 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:37.749 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:38.007 01:39:23 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:04:38.007 01:39:23 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:38.007 01:39:23 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:38.007 01:39:23 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:38.007 01:39:23 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:38.007 01:39:23 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:38.007 01:39:23 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:38.007 01:39:23 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:38.007 01:39:23 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:38.007 01:39:23 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:04:38.007 01:39:23 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:04:38.007 01:39:23 -- spdk/autotest.sh@121 -- # grep -v p 00:04:38.007 01:39:23 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:04:38.007 01:39:23 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:04:38.007 01:39:23 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:04:38.007 01:39:23 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:04:38.007 01:39:23 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:38.007 No valid GPT data, bailing 00:04:38.007 01:39:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:38.007 01:39:23 -- scripts/common.sh@393 -- # pt= 00:04:38.007 01:39:23 -- scripts/common.sh@394 -- # return 1 00:04:38.007 01:39:23 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:38.007 1+0 records in 00:04:38.007 1+0 records out 00:04:38.007 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00245628 s, 427 MB/s 00:04:38.007 01:39:23 -- spdk/autotest.sh@129 -- # sync 00:04:38.007 01:39:23 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:38.007 01:39:23 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:38.007 01:39:23 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:39.907 01:39:25 -- spdk/autotest.sh@135 -- # uname -s 00:04:39.907 01:39:25 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:04:39.908 01:39:25 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:39.908 01:39:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.908 01:39:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.908 01:39:25 -- common/autotest_common.sh@10 -- # set +x 00:04:39.908 ************************************ 00:04:39.908 START TEST setup.sh 00:04:39.908 ************************************ 
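An aside on the pre_cleanup trace above: autotest first collects zoned block devices (a zoned namespace must not be blindly overwritten), then probes each remaining /dev/nvme*n* for a partition table, and wipes the first megabyte when none is found ("No valid GPT data, bailing"). A minimal bash sketch of that flow — the standalone framing and helper structure are assumptions; only the probed paths and commands come from the trace:

#!/usr/bin/env bash
# Sketch of the pre-cleanup flow traced above (assumed standalone form).
for sysdev in /sys/block/nvme*; do
    [[ -e $sysdev ]] || continue
    dev=${sysdev##*/}
    # is_block_zoned: queue/zoned reads "none" for ordinary namespaces
    if [[ -e $sysdev/queue/zoned && $(<"$sysdev/queue/zoned") != none ]]; then
        continue    # leave zoned devices alone
    fi
    # block_in_use-style probe: blkid prints nothing when no partition table exists
    pt=$(blkid -s PTTYPE -o value "/dev/$dev" || true)
    if [[ -z $pt ]]; then
        # destructive: zero the first MiB so stale metadata cannot leak into later tests
        dd if=/dev/zero of="/dev/$dev" bs=1M count=1
    fi
done
sync

The dd report in the log is self-consistent: 1048576 bytes in 0.00245628 s works out to roughly 427 MB/s, exactly as printed.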
00:04:39.908 01:39:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:39.908 * Looking for test storage... 00:04:39.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:39.908 01:39:25 -- setup/test-setup.sh@10 -- # uname -s 00:04:39.908 01:39:25 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:39.908 01:39:25 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:39.908 01:39:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:39.908 01:39:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:39.908 01:39:25 -- common/autotest_common.sh@10 -- # set +x 00:04:39.908 ************************************ 00:04:39.908 START TEST acl 00:04:39.908 ************************************ 00:04:39.908 01:39:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:39.908 * Looking for test storage... 00:04:39.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:39.908 01:39:25 -- setup/acl.sh@10 -- # get_zoned_devs 00:04:39.908 01:39:25 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:39.908 01:39:25 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:39.908 01:39:25 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:39.908 01:39:25 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:39.908 01:39:25 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:39.908 01:39:25 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:39.908 01:39:25 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.908 01:39:25 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:39.908 01:39:25 -- setup/acl.sh@12 -- # devs=() 00:04:39.908 01:39:25 -- setup/acl.sh@12 -- # declare -a devs 00:04:39.908 01:39:25 -- setup/acl.sh@13 -- # drivers=() 00:04:39.908 01:39:25 -- setup/acl.sh@13 -- # declare -A drivers 00:04:40.165 01:39:25 -- setup/acl.sh@51 -- # setup reset 00:04:40.165 01:39:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:40.165 01:39:25 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.541 01:39:27 -- setup/acl.sh@52 -- # collect_setup_devs 00:04:41.541 01:39:27 -- setup/acl.sh@16 -- # local dev driver 00:04:41.541 01:39:27 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:41.541 01:39:27 -- setup/acl.sh@15 -- # setup output status 00:04:41.541 01:39:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.541 01:39:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:42.477 Hugepages 00:04:42.477 node hugesize free / total 00:04:42.477 01:39:28 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:42.477 01:39:28 -- setup/acl.sh@19 -- # continue 00:04:42.477 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.477 01:39:28 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:42.477 01:39:28 -- setup/acl.sh@19 -- # continue 00:04:42.477 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.477 01:39:28 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:42.477 01:39:28 -- setup/acl.sh@19 -- # continue 00:04:42.477 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.477 00:04:42.477 Type BDF 
Vendor Device NUMA Driver Device Block devices 00:04:42.477 01:39:28 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:42.477 01:39:28 -- setup/acl.sh@19 -- # continue 00:04:42.477 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.477 01:39:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:42.477 01:39:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.477 01:39:28 -- setup/acl.sh@20 -- # continue 00:04:42.477 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.477 01:39:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:42.477 01:39:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.477 01:39:28 -- setup/acl.sh@20 -- # continue 00:04:42.477 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.477 01:39:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:42.477 01:39:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.477 01:39:28 -- setup/acl.sh@20 -- # continue 00:04:42.477 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.477 01:39:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:42.477 01:39:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.477 01:39:28 -- setup/acl.sh@20 -- # continue 00:04:42.477 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.477 01:39:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:42.477 01:39:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.477 01:39:28 -- setup/acl.sh@20 -- # continue 00:04:42.477 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.477 01:39:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:42.477 01:39:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.477 01:39:28 -- setup/acl.sh@20 -- # continue 00:04:42.477 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.477 01:39:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:42.478 01:39:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.478 01:39:28 -- setup/acl.sh@20 -- # continue 00:04:42.478 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.478 01:39:28 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:42.478 01:39:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.478 01:39:28 -- setup/acl.sh@20 -- # continue 00:04:42.478 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.478 01:39:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:42.478 01:39:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.478 01:39:28 -- setup/acl.sh@20 -- # continue 00:04:42.478 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.478 01:39:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:42.478 01:39:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.478 01:39:28 -- setup/acl.sh@20 -- # continue 00:04:42.478 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.478 01:39:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:42.478 01:39:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.478 01:39:28 -- setup/acl.sh@20 -- # continue 00:04:42.478 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.478 01:39:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:42.478 01:39:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.478 01:39:28 -- setup/acl.sh@20 -- # continue 00:04:42.478 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.478 
01:39:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:42.478 01:39:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.478 01:39:28 -- setup/acl.sh@20 -- # continue 00:04:42.478 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.478 01:39:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:42.478 01:39:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.478 01:39:28 -- setup/acl.sh@20 -- # continue 00:04:42.478 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.737 01:39:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:42.737 01:39:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.737 01:39:28 -- setup/acl.sh@20 -- # continue 00:04:42.737 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.737 01:39:28 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:42.737 01:39:28 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:42.737 01:39:28 -- setup/acl.sh@20 -- # continue 00:04:42.737 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.737 01:39:28 -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:42.737 01:39:28 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:42.737 01:39:28 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:42.737 01:39:28 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:42.737 01:39:28 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:42.737 01:39:28 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:42.737 01:39:28 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:42.737 01:39:28 -- setup/acl.sh@54 -- # run_test denied denied 00:04:42.737 01:39:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:42.737 01:39:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:42.737 01:39:28 -- common/autotest_common.sh@10 -- # set +x 00:04:42.737 ************************************ 00:04:42.737 START TEST denied 00:04:42.737 ************************************ 00:04:42.737 01:39:28 -- common/autotest_common.sh@1104 -- # denied 00:04:42.737 01:39:28 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:42.737 01:39:28 -- setup/acl.sh@38 -- # setup output config 00:04:42.737 01:39:28 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:42.737 01:39:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.737 01:39:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:44.114 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:44.114 01:39:29 -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:44.114 01:39:29 -- setup/acl.sh@28 -- # local dev driver 00:04:44.114 01:39:29 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:44.114 01:39:29 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:44.114 01:39:29 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:44.114 01:39:29 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:44.114 01:39:29 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:44.114 01:39:29 -- setup/acl.sh@41 -- # setup reset 00:04:44.114 01:39:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:44.114 01:39:29 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:46.693 00:04:46.693 real 0m3.910s 00:04:46.693 user 0m1.137s 00:04:46.693 sys 0m1.846s 00:04:46.693 01:39:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.693 01:39:32 -- 
common/autotest_common.sh@10 -- # set +x 00:04:46.693 ************************************ 00:04:46.693 END TEST denied 00:04:46.693 ************************************ 00:04:46.693 01:39:32 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:46.693 01:39:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:46.693 01:39:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:46.693 01:39:32 -- common/autotest_common.sh@10 -- # set +x 00:04:46.693 ************************************ 00:04:46.693 START TEST allowed 00:04:46.693 ************************************ 00:04:46.693 01:39:32 -- common/autotest_common.sh@1104 -- # allowed 00:04:46.693 01:39:32 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:46.693 01:39:32 -- setup/acl.sh@45 -- # setup output config 00:04:46.693 01:39:32 -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:46.693 01:39:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.693 01:39:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:49.223 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:49.223 01:39:34 -- setup/acl.sh@47 -- # verify 00:04:49.223 01:39:34 -- setup/acl.sh@28 -- # local dev driver 00:04:49.223 01:39:34 -- setup/acl.sh@48 -- # setup reset 00:04:49.223 01:39:34 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.223 01:39:34 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:50.599 00:04:50.599 real 0m3.881s 00:04:50.599 user 0m1.090s 00:04:50.599 sys 0m1.651s 00:04:50.599 01:39:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.599 01:39:36 -- common/autotest_common.sh@10 -- # set +x 00:04:50.599 ************************************ 00:04:50.599 END TEST allowed 00:04:50.599 ************************************ 00:04:50.599 00:04:50.599 real 0m10.517s 00:04:50.599 user 0m3.271s 00:04:50.599 sys 0m5.267s 00:04:50.599 01:39:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:50.599 01:39:36 -- common/autotest_common.sh@10 -- # set +x 00:04:50.599 ************************************ 00:04:50.599 END TEST acl 00:04:50.599 ************************************ 00:04:50.599 01:39:36 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:50.599 01:39:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.599 01:39:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.599 01:39:36 -- common/autotest_common.sh@10 -- # set +x 00:04:50.599 ************************************ 00:04:50.599 START TEST hugepages 00:04:50.599 ************************************ 00:04:50.599 01:39:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:50.599 * Looking for test storage... 
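The hugepages suite that opens here begins by asking /proc/meminfo for the default huge page size; the long run of [[ $var == Hugepagesize ]] / continue lines below is simply the bash xtrace of that field scan. A hypothetical standalone equivalent of the get_meminfo helper (the real one, per the trace, also checks for per-node meminfo files):

# Sketch: scan /proc/meminfo for one field, as the xtrace below shows.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # every miss appears in the trace as a continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}
get_meminfo Hugepagesize    # prints 2048 on this machine, matching the trace's "echo 2048"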
00:04:50.599 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:50.599 01:39:36 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:50.599 01:39:36 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:50.599 01:39:36 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:50.599 01:39:36 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:50.599 01:39:36 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:50.599 01:39:36 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:50.599 01:39:36 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:50.599 01:39:36 -- setup/common.sh@18 -- # local node= 00:04:50.599 01:39:36 -- setup/common.sh@19 -- # local var val 00:04:50.599 01:39:36 -- setup/common.sh@20 -- # local mem_f mem 00:04:50.599 01:39:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.599 01:39:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.599 01:39:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.599 01:39:36 -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.599 01:39:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.599 01:39:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.599 01:39:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.599 01:39:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 33724664 kB' 'MemAvailable: 37436364 kB' 'Buffers: 2696 kB' 'Cached: 20186700 kB' 'SwapCached: 0 kB' 'Active: 17146328 kB' 'Inactive: 3504240 kB' 'Active(anon): 16560196 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464480 kB' 'Mapped: 229016 kB' 'Shmem: 16099024 kB' 'KReclaimable: 229904 kB' 'Slab: 620964 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391060 kB' 'KernelStack: 12880 kB' 'PageTables: 8928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562312 kB' 'Committed_AS: 17699440 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197004 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB' 00:04:50.599 01:39:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.599 01:39:36 -- setup/common.sh@32 -- # continue 00:04:50.599 01:39:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.599 01:39:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.599 01:39:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.599 01:39:36 -- setup/common.sh@32 -- # continue 00:04:50.599 01:39:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.599 01:39:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.599 01:39:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.599 01:39:36 -- setup/common.sh@32 -- # continue 00:04:50.599 01:39:36 -- setup/common.sh@31 -- # IFS=': ' 00:04:50.599 01:39:36 -- setup/common.sh@31 -- # read -r var val _ 00:04:50.599 01:39:36 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.599 01:39:36 -- setup/common.sh@32 -- # continue 00:04:50.599 (xtrace elided: the same [[ $var == Hugepagesize ]] / continue check repeats for every remaining /proc/meminfo field, Cached through HugePages_Surp) 00:04:50.600 01:39:36 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:50.600 01:39:36 -- setup/common.sh@33 -- # echo 2048 00:04:50.600 01:39:36 -- setup/common.sh@33 -- # return 0 00:04:50.600 01:39:36 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:50.600 01:39:36 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:50.600 01:39:36 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:50.600 01:39:36 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:50.600 01:39:36 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:50.600 01:39:36 --
setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:50.600 01:39:36 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:50.600 01:39:36 -- setup/hugepages.sh@207 -- # get_nodes 00:04:50.600 01:39:36 -- setup/hugepages.sh@27 -- # local node 00:04:50.600 01:39:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.600 01:39:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:50.600 01:39:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:50.600 01:39:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:50.600 01:39:36 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:50.600 01:39:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:50.600 01:39:36 -- setup/hugepages.sh@208 -- # clear_hp 00:04:50.600 01:39:36 -- setup/hugepages.sh@37 -- # local node hp 00:04:50.600 01:39:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:50.600 01:39:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.600 01:39:36 -- setup/hugepages.sh@41 -- # echo 0 00:04:50.600 01:39:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.600 01:39:36 -- setup/hugepages.sh@41 -- # echo 0 00:04:50.601 01:39:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:50.601 01:39:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.601 01:39:36 -- setup/hugepages.sh@41 -- # echo 0 00:04:50.601 01:39:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:50.601 01:39:36 -- setup/hugepages.sh@41 -- # echo 0 00:04:50.601 01:39:36 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:50.601 01:39:36 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:50.601 01:39:36 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:50.601 01:39:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:50.601 01:39:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:50.601 01:39:36 -- common/autotest_common.sh@10 -- # set +x 00:04:50.601 ************************************ 00:04:50.601 START TEST default_setup 00:04:50.601 ************************************ 00:04:50.601 01:39:36 -- common/autotest_common.sh@1104 -- # default_setup 00:04:50.601 01:39:36 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:50.601 01:39:36 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:50.601 01:39:36 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:50.601 01:39:36 -- setup/hugepages.sh@51 -- # shift 00:04:50.601 01:39:36 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:50.601 01:39:36 -- setup/hugepages.sh@52 -- # local node_ids 00:04:50.601 01:39:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:50.601 01:39:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:50.601 01:39:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:50.601 01:39:36 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:50.601 01:39:36 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:50.601 01:39:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:50.601 01:39:36 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:50.601 01:39:36 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:50.601 01:39:36 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:50.601 01:39:36 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 
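clear_hp above walks every NUMA node and zeroes each huge-page pool through sysfs (hence the four "echo 0" entries: two nodes times two page sizes) before default_setup requests 1024 pages on node 0. A minimal sketch of that clearing loop, with paths as shown in the trace:

# Sketch: reset all per-node huge-page pools to zero (root required).
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"    # covers both the 2048kB and 1048576kB pools
    done
done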
00:04:50.601 01:39:36 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:50.601 01:39:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:50.601 01:39:36 -- setup/hugepages.sh@73 -- # return 0 00:04:50.601 01:39:36 -- setup/hugepages.sh@137 -- # setup output 00:04:50.601 01:39:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:50.601 01:39:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:51.976 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:51.976 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:51.976 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:51.976 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:51.976 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:51.976 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:51.976 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:51.976 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:51.976 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:51.976 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:51.976 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:51.976 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:51.976 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:51.976 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:51.976 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:51.976 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:52.913 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:52.913 01:39:38 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:52.913 01:39:38 -- setup/hugepages.sh@89 -- # local node 00:04:52.913 01:39:38 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:52.913 01:39:38 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:52.913 01:39:38 -- setup/hugepages.sh@92 -- # local surp 00:04:52.913 01:39:38 -- setup/hugepages.sh@93 -- # local resv 00:04:52.913 01:39:38 -- setup/hugepages.sh@94 -- # local anon 00:04:52.913 01:39:38 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:52.913 01:39:38 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:52.913 01:39:38 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:52.913 01:39:38 -- setup/common.sh@18 -- # local node= 00:04:52.913 01:39:38 -- setup/common.sh@19 -- # local var val 00:04:52.913 01:39:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.913 01:39:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.913 01:39:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.913 01:39:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.913 01:39:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.913 01:39:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.913 01:39:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.913 01:39:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.914 01:39:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35815556 kB' 'MemAvailable: 39527256 kB' 'Buffers: 2696 kB' 'Cached: 20186788 kB' 'SwapCached: 0 kB' 'Active: 17165148 kB' 'Inactive: 3504240 kB' 'Active(anon): 16579016 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483288 kB' 'Mapped: 229032 kB' 'Shmem: 16099112 kB' 'KReclaimable: 229904 kB' 'Slab: 620672 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 390768 kB' 'KernelStack: 12960 
kB' 'PageTables: 8824 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 17719436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197228 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB' 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # continue 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # continue 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # continue 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # continue 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # continue 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # continue 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # continue 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # continue 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # continue 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # continue 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.914 01:39:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.914 01:39:38 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.914 
01:39:38 -- setup/common.sh@32 -- # continue 00:04:52.914 (xtrace elided: identical [[ $var == AnonHugePages ]] / continue checks repeat for Inactive(file) through VmallocTotal) 00:04:52.915 01:39:38 --
setup/common.sh@31 -- # read -r var val _ 00:04:52.915 01:39:38 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.915 01:39:38 -- setup/common.sh@32 -- # continue 00:04:52.915 01:39:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.915 01:39:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.915 01:39:38 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.915 01:39:38 -- setup/common.sh@32 -- # continue 00:04:52.915 01:39:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.915 01:39:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.915 01:39:38 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.915 01:39:38 -- setup/common.sh@32 -- # continue 00:04:52.915 01:39:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.915 01:39:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.915 01:39:38 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.915 01:39:38 -- setup/common.sh@32 -- # continue 00:04:52.915 01:39:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.915 01:39:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.915 01:39:38 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:52.915 01:39:38 -- setup/common.sh@33 -- # echo 0 00:04:52.915 01:39:38 -- setup/common.sh@33 -- # return 0 00:04:52.915 01:39:38 -- setup/hugepages.sh@97 -- # anon=0 00:04:52.915 01:39:38 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:52.915 01:39:38 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.915 01:39:38 -- setup/common.sh@18 -- # local node= 00:04:52.915 01:39:38 -- setup/common.sh@19 -- # local var val 00:04:52.915 01:39:38 -- setup/common.sh@20 -- # local mem_f mem 00:04:52.915 01:39:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.915 01:39:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.915 01:39:38 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.915 01:39:38 -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.915 01:39:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.915 01:39:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:52.915 01:39:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:52.915 01:39:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35824996 kB' 'MemAvailable: 39536696 kB' 'Buffers: 2696 kB' 'Cached: 20186788 kB' 'SwapCached: 0 kB' 'Active: 17164460 kB' 'Inactive: 3504240 kB' 'Active(anon): 16578328 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482516 kB' 'Mapped: 229116 kB' 'Shmem: 16099112 kB' 'KReclaimable: 229904 kB' 'Slab: 620864 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 390960 kB' 'KernelStack: 12848 kB' 'PageTables: 8748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 17719448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2809436 
[... xtrace elided: the same setup/common.sh@31-32 loop scans every field of the snapshot above and skips each one that is not HugePages_Surp ...]
00:04:53.178 01:39:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:53.178 01:39:38 -- setup/common.sh@33 -- # echo 0
00:04:53.178 01:39:38 -- setup/common.sh@33 -- # return 0
00:04:53.178 01:39:38 -- setup/hugepages.sh@99 -- # surp=0
00:04:53.178 01:39:38 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:53.178 01:39:38 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:53.178 01:39:38 -- setup/common.sh@18 -- # local node=
00:04:53.178 01:39:38 -- setup/common.sh@19 -- # local var val
00:04:53.178 01:39:38 -- setup/common.sh@20 -- # local mem_f mem
00:04:53.178 01:39:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.178 01:39:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.179 01:39:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.179 01:39:38 -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.179 01:39:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.179 01:39:38 -- setup/common.sh@31 -- # IFS=': '
00:04:53.179 01:39:38 -- setup/common.sh@31 -- # read -r var val _
00:04:53.179 01:39:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35825108 kB' 'MemAvailable: 39536808 kB' 'Buffers: 2696 kB' 'Cached: 20186796 kB' 'SwapCached: 0 kB' 'Active: 17163068 kB' 'Inactive: 3504240 kB' 'Active(anon): 16576936 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481100 kB' 'Mapped: 229116 kB' 'Shmem: 16099120 kB' 'KReclaimable: 229904 kB' 'Slab: 620864 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 390960 kB' 'KernelStack: 12864 kB' 'PageTables: 8676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 17719464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197148 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
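Both counters the helper reads here are standard kernel fields, so the same values can be spot-checked directly without the scan loop; on the machine captured in this log both print 0.

awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo   # surplus pages allocated beyond nr_hugepages
awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo   # pages reserved for mappings but not yet faulted in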
[... xtrace elided: the field-by-field scan repeats over the snapshot above, skipping every field that is not HugePages_Rsvd ...]
00:04:53.180 01:39:38 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:53.180 01:39:38 -- setup/common.sh@33 -- # echo 0
00:04:53.180 01:39:38 -- setup/common.sh@33 -- # return 0
00:04:53.180 01:39:38 -- setup/hugepages.sh@100 -- # resv=0
00:04:53.180 01:39:38 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:53.180 nr_hugepages=1024
00:04:53.180 01:39:38 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:53.180 resv_hugepages=0
00:04:53.180 01:39:38 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:53.180 surplus_hugepages=0
00:04:53.180 01:39:38 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:53.180 anon_hugepages=0
00:04:53.180 01:39:38 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:53.180 01:39:38 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:53.180 01:39:38 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
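The arithmetic check at hugepages.sh@107 above is the heart of verify_nr_hugepages: the kernel's reported pool must add up to the requested page count once surplus and reserved pages are considered. A sketch of that invariant, using the get_meminfo reconstruction shown earlier (variable names are mine, not the script's):

nr_hugepages=1024                     # the pool this test configured
anon=$(get_meminfo AnonHugePages)     # 0 in this run; THP is checked separately
surp=$(get_meminfo HugePages_Surp)    # 0
resv=$(get_meminfo HugePages_Rsvd)    # 0
total=$(get_meminfo HugePages_Total)  # 1024
(( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2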
00:04:53.180 01:39:38 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:53.180 01:39:38 -- setup/common.sh@18 -- # local node=
00:04:53.180 01:39:38 -- setup/common.sh@19 -- # local var val
00:04:53.180 01:39:38 -- setup/common.sh@20 -- # local mem_f mem
00:04:53.180 01:39:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.180 01:39:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:53.180 01:39:38 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:53.180 01:39:38 -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.180 01:39:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.180 01:39:38 -- setup/common.sh@31 -- # IFS=': '
00:04:53.180 01:39:38 -- setup/common.sh@31 -- # read -r var val _
00:04:53.180 01:39:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35825032 kB' 'MemAvailable: 39536732 kB' 'Buffers: 2696 kB' 'Cached: 20186816 kB' 'SwapCached: 0 kB' 'Active: 17163048 kB' 'Inactive: 3504240 kB' 'Active(anon): 16576916 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481040 kB' 'Mapped: 229040 kB' 'Shmem: 16099140 kB' 'KReclaimable: 229904 kB' 'Slab: 620876 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 390972 kB' 'KernelStack: 12880 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 17719476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197148 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
[... xtrace elided: the scan skips every field of the snapshot above until HugePages_Total ...]
00:04:53.181 01:39:38 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:53.181 01:39:38 -- setup/common.sh@33 -- # echo 1024
00:04:53.181 01:39:38 -- setup/common.sh@33 -- # return 0
00:04:53.181 01:39:38 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:53.181 01:39:38 -- setup/hugepages.sh@112 -- # get_nodes
00:04:53.181 01:39:38 -- setup/hugepages.sh@27 -- # local node
00:04:53.181 01:39:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:53.181 01:39:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:53.181 01:39:38 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:53.182 01:39:38 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:53.182 01:39:38 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:53.182 01:39:38 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:53.182 01:39:38 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:53.182 01:39:38 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:53.182 01:39:38 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:53.182 01:39:38 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:53.182 01:39:38 -- setup/common.sh@18 -- # local node=0
00:04:53.182 01:39:38 -- setup/common.sh@19 -- # local var val
00:04:53.182 01:39:38 -- setup/common.sh@20 -- # local mem_f mem
00:04:53.182 01:39:38 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:53.182 01:39:38 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:53.182 01:39:38 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:53.182 01:39:38 -- setup/common.sh@28 -- # mapfile -t mem
00:04:53.182 01:39:38 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:53.182 01:39:38 -- setup/common.sh@31 -- # IFS=': '
00:04:53.182 01:39:38 -- setup/common.sh@31 -- # read -r var val _
00:04:53.182 01:39:38 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 16448896 kB' 'MemUsed: 16380988 kB' 'SwapCached: 0 kB' 'Active: 9956528 kB' 'Inactive: 3323444 kB' 'Active(anon): 9577216 kB' 'Inactive(anon): 0 kB' 'Active(file): 379312 kB' 'Inactive(file): 3323444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12991676 kB' 'Mapped: 126144 kB' 'AnonPages: 291392 kB' 'Shmem: 9288920 kB' 'KernelStack: 7112 kB' 'PageTables: 4424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120184 kB' 'Slab: 316784 kB' 'SReclaimable: 120184 kB' 'SUnreclaim: 196600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
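get_nodes, traced above, enumerates the NUMA nodes and records each node's hugepage count; on this two-socket machine node0 holds all 1024 pages and node1 none. Roughly, as reconstructed from the trace (get_meminfo is the earlier sketch; the real hugepages.sh may obtain the counts differently):

shopt -s extglob
nodes_sys=()
for node in /sys/devices/system/node/node+([0-9]); do
	# ${node##*node} strips the sysfs path down to the numeric node id
	nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
done
no_nodes=${#nodes_sys[@]}   # 2 on this machine
(( no_nodes > 0 )) || exit 1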
[... xtrace elided: the scan walks the node0 snapshot above, skipping every field before the HugePages_Surp entry ...]
00:04:53.183 01:39:38 -- setup/common.sh@32 -- # [[ HugePages_Free ==
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.183 01:39:38 -- setup/common.sh@32 -- # continue 00:04:53.183 01:39:38 -- setup/common.sh@31 -- # IFS=': ' 00:04:53.183 01:39:38 -- setup/common.sh@31 -- # read -r var val _ 00:04:53.183 01:39:38 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:53.183 01:39:38 -- setup/common.sh@33 -- # echo 0 00:04:53.183 01:39:38 -- setup/common.sh@33 -- # return 0 00:04:53.183 01:39:38 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:53.183 01:39:38 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:53.183 01:39:38 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:53.183 01:39:38 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:53.183 01:39:38 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:53.183 node0=1024 expecting 1024 00:04:53.183 01:39:38 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:53.183 00:04:53.183 real 0m2.483s 00:04:53.183 user 0m0.665s 00:04:53.183 sys 0m0.933s 00:04:53.183 01:39:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.183 01:39:38 -- common/autotest_common.sh@10 -- # set +x 00:04:53.183 ************************************ 00:04:53.183 END TEST default_setup 00:04:53.183 ************************************ 00:04:53.183 01:39:38 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:53.183 01:39:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:53.183 01:39:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:53.183 01:39:38 -- common/autotest_common.sh@10 -- # set +x 00:04:53.183 ************************************ 00:04:53.183 START TEST per_node_1G_alloc 00:04:53.183 ************************************ 00:04:53.183 01:39:38 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:04:53.183 01:39:38 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:53.183 01:39:38 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:53.183 01:39:38 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:53.183 01:39:38 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:53.183 01:39:38 -- setup/hugepages.sh@51 -- # shift 00:04:53.183 01:39:38 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:53.183 01:39:38 -- setup/hugepages.sh@52 -- # local node_ids 00:04:53.183 01:39:38 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:53.183 01:39:38 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:53.183 01:39:38 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:53.183 01:39:38 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:53.183 01:39:38 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:53.183 01:39:38 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:53.183 01:39:38 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:53.183 01:39:38 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:53.183 01:39:38 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:53.183 01:39:38 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:53.183 01:39:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:53.183 01:39:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:53.183 01:39:38 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:53.183 01:39:38 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:53.183 01:39:38 -- setup/hugepages.sh@73 -- # return 0 00:04:53.183 01:39:38 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:53.183 
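[editor's note] The get_test_nr_hugepages trace above reduces the 1048576 kB (1 GiB) request to 512 pages per requested node: 1048576 kB divided by the 2048 kB Hugepagesize reported in the meminfo dumps below is 512, assigned once to node 0 and once to node 1. A minimal bash sketch of that arithmetic, reconstructed from the trace for illustration only (the real setup/hugepages.sh keeps this state in globals):

get_test_nr_hugepages() {
    local size=$1                    # requested size in kB (1048576 kB = 1 GiB)
    shift
    local node_ids=("$@")            # e.g. 0 1
    local default_hugepages=2048     # kB, per "Hugepagesize: 2048 kB" below
    local nr_hugepages=$((size / default_hugepages))   # 1048576 / 2048 = 512
    local node
    for node in "${node_ids[@]}"; do
        echo "node${node}=${nr_hugepages}"   # each listed node gets the full count
    done
}

get_test_nr_hugepages 1048576 0 1    # prints node0=512 and node1=512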
01:39:38 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:04:53.183 01:39:38 -- setup/hugepages.sh@146 -- # setup output
00:04:53.183 01:39:38 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:53.183 01:39:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:54.118 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:54.118 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:54.118 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:54.118 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:54.118 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:54.118 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:54.118 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:54.118 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:54.118 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:54.118 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:54.380 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:54.380 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:54.380 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:54.380 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:54.380 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:54.380 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:54.380 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:54.380 01:39:39 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:04:54.380 01:39:39 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:04:54.380 01:39:39 -- setup/hugepages.sh@89 -- # local node
00:04:54.380 01:39:39 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:54.380 01:39:39 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:54.380 01:39:39 -- setup/hugepages.sh@92 -- # local surp
00:04:54.380 01:39:39 -- setup/hugepages.sh@93 -- # local resv
00:04:54.380 01:39:39 -- setup/hugepages.sh@94 -- # local anon
00:04:54.380 01:39:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
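[editor's note] The @96 test above checks the transparent hugepage policy string ("always [madvise] never", brackets marking the active mode) to make sure THP is not forced off before AnonHugePages is sampled. A short sketch of the same test; the sysfs path is an assumption based on the string's format, since the read itself is not shown in this trace:

thp_enabled=$(cat /sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
if [[ $thp_enabled != *"[never]"* ]]; then
    # THP is not disabled, so the AnonHugePages counter sampled next is meaningful
    echo "THP active mode: $thp_enabled"
fi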
00:04:54.380 01:39:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:54.380 01:39:39 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:54.380 01:39:39 -- setup/common.sh@18 -- # local node=
00:04:54.380 01:39:39 -- setup/common.sh@19 -- # local var val
00:04:54.380 01:39:39 -- setup/common.sh@20 -- # local mem_f mem
00:04:54.380 01:39:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:54.380 01:39:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:54.380 01:39:39 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:54.380 01:39:39 -- setup/common.sh@28 -- # mapfile -t mem
00:04:54.380 01:39:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:54.380 01:39:39 -- setup/common.sh@31 -- # IFS=': '
00:04:54.380 01:39:39 -- setup/common.sh@31 -- # read -r var val _
00:04:54.380 01:39:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35844800 kB' 'MemAvailable: 39556500 kB' 'Buffers: 2696 kB' 'Cached: 20186864 kB' 'SwapCached: 0 kB' 'Active: 17163344 kB' 'Inactive: 3504240 kB' 'Active(anon): 16577212 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481204 kB' 'Mapped: 229124 kB' 'Shmem: 16099188 kB' 'KReclaimable: 229904 kB' 'Slab: 621004 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391100 kB' 'KernelStack: 12896 kB' 'PageTables: 8692 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 17719516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197228 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
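[editor's note] The common.sh@17-@33 entries above and the field-by-field comparisons that follow are one call to get_meminfo: dump /proc/meminfo, split each line on ': ', and return the value of the first key that matches the requested name. A minimal reconstruction of that loop, simplified for illustration (the traced script buffers the file with mapfile and also supports the per-node variant checked at @23):

get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do   # "AnonHugePages: 0 kB" -> var=AnonHugePages val=0
        if [[ $var == "$get" ]]; then
            echo "$val"                    # kB for sizes, a bare count for HugePages_* keys
            return 0
        fi
    done </proc/meminfo
    return 1   # hypothetical fallback; the traced run always finds its key
}

get_meminfo AnonHugePages   # prints 0 against the snapshot above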
00:04:54.380 01:39:39 -- setup/common.sh@32 -- # [xtrace condensed: the loop compares every key from MemTotal through HardwareCorrupted against AnonHugePages and skips each via continue]
00:04:54.381 01:39:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:54.381 01:39:39 -- setup/common.sh@33 -- # echo 0
00:04:54.381 01:39:39 -- setup/common.sh@33 -- # return 0
00:04:54.381 01:39:39 -- setup/hugepages.sh@97 -- # anon=0
00:04:54.381 01:39:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:54.381 01:39:39 -- setup/common.sh@17 -- # [xtrace condensed: get_meminfo prologue as above, with get=HugePages_Surp and node unset]
00:04:54.381 01:39:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35847432 kB' 'MemAvailable: 39559132 kB' 'Buffers: 2696 kB' 'Cached: 20186864 kB' 'SwapCached: 0 kB' 'Active: 17163856 kB' 'Inactive: 3504240 kB' 'Active(anon): 16577724 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481724 kB' 'Mapped: 229124 kB' 'Shmem: 16099188 kB' 'KReclaimable: 229904 kB' 'Slab: 620980 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391076 kB' 'KernelStack: 12912 kB' 'PageTables: 8732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 17719528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197228 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
00:04:54.382 01:39:39 -- setup/common.sh@32 -- # [xtrace condensed: the loop compares every key from MemTotal through HugePages_Rsvd against HugePages_Surp and skips each via continue]
00:04:54.383 01:39:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:54.383 01:39:39 -- setup/common.sh@33 -- # echo 0
00:04:54.383 01:39:39 -- setup/common.sh@33 -- # return 0
00:04:54.383 01:39:39 -- setup/hugepages.sh@99 -- # surp=0
00:04:54.383 01:39:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:54.383 01:39:39 -- setup/common.sh@17 -- # [xtrace condensed: get_meminfo prologue as above, with get=HugePages_Rsvd]
00:04:54.383 01:39:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35847700 kB' 'MemAvailable: 39559400 kB' 'Buffers: 2696 kB' 'Cached: 20186864 kB' 'SwapCached: 0 kB' 'Active: 17163960 kB' 'Inactive: 3504240 kB' 'Active(anon): 16577828 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481872 kB' 'Mapped: 229120 kB' 'Shmem: 16099188 kB' 'KReclaimable: 229904 kB' 'Slab: 620980 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391076 kB' 'KernelStack: 12928 kB' 'PageTables: 8724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 17719540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197244 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
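[editor's note] The node= local and the @23 existence test in the get_meminfo prologue show how the same parser handles per-node queries: when a node id is supplied, it reads /sys/devices/system/node/node<N>/meminfo, whose lines carry a "Node <N> " prefix that the @29 expansion strips. A hedged sketch of that branch (extglob is required for the +([0-9]) pattern; node=0 is a hypothetical argument, since the traced calls leave it empty):

shopt -s extglob
node=0                                 # hypothetical node id
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
fi
mapfile -t mem <"$mem_f"
mem=("${mem[@]#Node +([0-9]) }")       # drop the "Node 0 " prefix from per-node lines
printf '%s\n' "${mem[@]:0:3}"          # peek at the first few normalized lines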
00:04:54.383 01:39:40 -- setup/common.sh@32 -- # [xtrace condensed: the loop compares every key from MemTotal through HugePages_Free against HugePages_Rsvd and skips each via continue]
00:04:54.384 01:39:40 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:54.384 01:39:40 -- setup/common.sh@33 -- # echo 0
00:04:54.384 01:39:40 -- setup/common.sh@33 -- # return 0
00:04:54.384 01:39:40 -- setup/hugepages.sh@100 -- # resv=0
00:04:54.384 01:39:40 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:54.384 nr_hugepages=1024
00:04:54.384 01:39:40 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:54.384 resv_hugepages=0
00:04:54.384 01:39:40 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:54.384 surplus_hugepages=0
00:04:54.384 01:39:40 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:54.384 anon_hugepages=0
00:04:54.384 01:39:40 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:54.384 01:39:40 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:54.384 01:39:40 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
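[editor's note] The @107 and @109 checks above are plain counter arithmetic: the pages the kernel actually holds (HugePages_Total, 1024 here) must equal the requested nr_hugepages plus any surplus and reserved pages, and with surp=resv=0 that collapses to 1024 == 1024. A sketch of the same invariant read straight from /proc/meminfo:

total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
nr_hugepages=1024                      # the count this test configured
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
fi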
00:04:54.384 01:39:40 -- setup/common.sh@17 -- # [xtrace condensed: get_meminfo prologue as above, with get=HugePages_Total]
00:04:54.385 01:39:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35849504 kB' 'MemAvailable: 39561204 kB' 'Buffers: 2696 kB' 'Cached: 20186880 kB' 'SwapCached: 0 kB' 'Active: 17164132 kB' 'Inactive: 3504240 kB' 'Active(anon): 16578000 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482128 kB' 'Mapped: 229044 kB' 'Shmem: 16099204 kB' 'KReclaimable: 229904 kB' 'Slab: 620956 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391052 kB' 'KernelStack: 12896 kB' 'PageTables: 8616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 17719556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197244 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
00:04:54.645 01:39:40 -- setup/common.sh@32 -- # [xtrace condensed: the loop compares keys from MemTotal onward against HugePages_Total, skipping each via continue; the capture ends partway through this scan, after the ShmemPmdMapped comparison]
# IFS=': ' 00:04:54.646 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.646 01:39:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.646 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.646 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.646 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.646 01:39:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.646 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.646 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.646 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.646 01:39:40 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.646 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.646 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.646 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.646 01:39:40 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.646 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.646 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.646 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.646 01:39:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.646 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.646 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.646 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.646 01:39:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:54.646 01:39:40 -- setup/common.sh@33 -- # echo 1024 00:04:54.646 01:39:40 -- setup/common.sh@33 -- # return 0 00:04:54.646 01:39:40 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:54.646 01:39:40 -- setup/hugepages.sh@112 -- # get_nodes 00:04:54.646 01:39:40 -- setup/hugepages.sh@27 -- # local node 00:04:54.646 01:39:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.646 01:39:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:54.646 01:39:40 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:54.647 01:39:40 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:54.647 01:39:40 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:54.647 01:39:40 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:54.647 01:39:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.647 01:39:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.647 01:39:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:54.647 01:39:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.647 01:39:40 -- setup/common.sh@18 -- # local node=0 00:04:54.647 01:39:40 -- setup/common.sh@19 -- # local var val 00:04:54.647 01:39:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.647 01:39:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.647 01:39:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:54.647 01:39:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:54.647 01:39:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.647 01:39:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 32829884 kB' 'MemFree: 17500456 kB' 'MemUsed: 15329428 kB' 'SwapCached: 0 kB' 'Active: 9957320 kB' 'Inactive: 3323444 kB' 'Active(anon): 9578008 kB' 'Inactive(anon): 0 kB' 'Active(file): 379312 kB' 'Inactive(file): 3323444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12991680 kB' 'Mapped: 126148 kB' 'AnonPages: 292324 kB' 'Shmem: 9288924 kB' 'KernelStack: 7128 kB' 'PageTables: 4472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120184 kB' 'Slab: 316708 kB' 'SReclaimable: 120184 kB' 'SUnreclaim: 196524 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 
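
The scan running through this stretch of trace is setup/common.sh's get_meminfo loop at work: after dumping the node-0 meminfo snapshot above, it walks every "key: value" line with IFS=': ' and read -r var val _, hitting the @32 continue for each non-matching key until it reaches the field it was asked for (HugePages_Surp here) and echoes that value. A minimal standalone sketch of the same pattern, assuming nothing beyond plain bash (the function name meminfo_field is illustrative, not the SPDK helper itself):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the "Node N " prefix strip below

    # Print one field from /proc/meminfo, or from a node's sysfs meminfo
    # when a node id is given -- the lookup the trace above is performing.
    meminfo_field() {   # hypothetical name
        local get=$1 node=${2:-} mem_f=/proc/meminfo line var val _
        local -a mem
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # sysfs lines begin with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    meminfo_field HugePages_Surp 0   # prints 0 on the host traced above
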
00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- 
setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.647 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.647 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 
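
Behind this stretch of trace sits one line of bookkeeping: hugepages.sh@110 earlier asserted that the global HugePages_Total (1024) equals nr_hugepages plus surplus plus reserved pages, and the @115-@117 loop now folds each node's HugePages_Surp (0 on this host) into nodes_test. A self-contained sketch of that accounting, assuming the meminfo_field helper sketched earlier and the surplus/reserved values of 0 seen in the trace:

    # Illustrative values taken from the trace; not the SPDK script itself.
    nr_hugepages=1024 surp=0 resv=0
    total=$(meminfo_field HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting is off"

    nodes_test=([0]=512 [1]=512)
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                      # @116 in the trace
        surp_n=$(meminfo_field HugePages_Surp "$node") || surp_n=0
        (( nodes_test[node] += surp_n ))                    # @117: the "+= 0" above
    done
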
00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@33 -- # echo 0 00:04:54.648 01:39:40 -- setup/common.sh@33 -- # return 0 00:04:54.648 01:39:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.648 01:39:40 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:54.648 01:39:40 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:54.648 01:39:40 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:54.648 01:39:40 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:54.648 01:39:40 -- setup/common.sh@18 -- # local node=1 00:04:54.648 01:39:40 -- setup/common.sh@19 -- # local var val 00:04:54.648 01:39:40 -- setup/common.sh@20 -- # local mem_f mem 00:04:54.648 01:39:40 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:54.648 01:39:40 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:54.648 01:39:40 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:54.648 01:39:40 -- setup/common.sh@28 -- # mapfile -t mem 00:04:54.648 01:39:40 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711836 kB' 'MemFree: 18348080 kB' 'MemUsed: 9363756 kB' 'SwapCached: 0 kB' 'Active: 7206792 kB' 'Inactive: 180796 kB' 'Active(anon): 6999972 kB' 'Inactive(anon): 0 kB' 'Active(file): 206820 kB' 'Inactive(file): 180796 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7197924 kB' 'Mapped: 102896 kB' 'AnonPages: 189780 kB' 'Shmem: 6810308 kB' 'KernelStack: 5800 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109720 kB' 'Slab: 304248 kB' 'SReclaimable: 109720 kB' 'SUnreclaim: 194528 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 
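
The pass above switched mem_f from /proc/meminfo to /sys/devices/system/node/node1/meminfo because node=1 was requested: each NUMA node exposes its own meminfo in sysfs, with every line carrying a "Node 1" prefix, and the dump above confirms 512 of the 1024 global hugepages landed on node 1. The same per-node split can be eyeballed outside the test with plain shell (the awk field positions assume the standard meminfo layouts):

    # Per-node hugepage counts vs. the global total.
    for f in /sys/devices/system/node/node*/meminfo; do
        awk '/HugePages_Total/ {printf "%s: %s\n", FILENAME, $NF}' "$f"
    done
    awk '/^HugePages_Total/ {print "global:", $2}' /proc/meminfo
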
00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- 
setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.648 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.648 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.649 01:39:40 -- 
setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # continue 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # IFS=': ' 00:04:54.649 01:39:40 -- setup/common.sh@31 -- # read -r var val _ 00:04:54.649 01:39:40 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:54.649 01:39:40 -- setup/common.sh@33 -- # echo 0 00:04:54.649 01:39:40 -- setup/common.sh@33 -- # return 0 00:04:54.649 01:39:40 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:54.649 01:39:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.649 01:39:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.649 01:39:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.649 01:39:40 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:54.649 node0=512 expecting 512 00:04:54.649 01:39:40 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:54.649 01:39:40 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:54.649 01:39:40 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:54.649 01:39:40 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:54.649 node1=512 expecting 512 00:04:54.649 01:39:40 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:54.649 00:04:54.649 real 0m1.426s 00:04:54.649 user 0m0.559s 00:04:54.649 sys 0m0.826s 00:04:54.649 01:39:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.649 01:39:40 -- common/autotest_common.sh@10 -- # set +x 00:04:54.649 ************************************ 00:04:54.649 END TEST per_node_1G_alloc 00:04:54.649 ************************************ 00:04:54.649 01:39:40 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:54.649 
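
per_node_1G_alloc ends with both nodes holding the 512 pages they were assigned, and run_test immediately launches even_2G_alloc. The get_test_nr_hugepages trace that follows reduces to simple arithmetic: a 2097152 kB request divided by the 2048 kB default hugepage size gives 1024 pages, and with two NUMA nodes the @81-@84 loop hands 512 to each. A standalone sketch of that split, using the values from the trace (the loop body here paraphrases the traced assignments; it is not the script verbatim):

    # 2 GB request -> page count -> even per-node split, as traced below.
    size_kb=2097152 hugepagesize_kb=2048 no_nodes=2
    nr_hugepages=$(( size_kb / hugepagesize_kb ))   # 1024
    per_node=$(( nr_hugepages / no_nodes ))         # 512
    declare -a nodes_test
    while (( no_nodes > 0 )); do
        nodes_test[no_nodes - 1]=$per_node          # @82: fills node1, then node0
        (( no_nodes-- ))
    done
    echo "nr_hugepages=$nr_hugepages per-node=${nodes_test[*]}"   # 512 512
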
01:39:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:54.649 01:39:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:54.649 01:39:40 -- common/autotest_common.sh@10 -- # set +x 00:04:54.649 ************************************ 00:04:54.649 START TEST even_2G_alloc 00:04:54.649 ************************************ 00:04:54.649 01:39:40 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:04:54.649 01:39:40 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:54.649 01:39:40 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:54.649 01:39:40 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:54.649 01:39:40 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:54.649 01:39:40 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:54.649 01:39:40 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:54.649 01:39:40 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:54.649 01:39:40 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:54.649 01:39:40 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:54.649 01:39:40 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:54.649 01:39:40 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:54.649 01:39:40 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:54.649 01:39:40 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:54.649 01:39:40 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:54.649 01:39:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.649 01:39:40 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:54.649 01:39:40 -- setup/hugepages.sh@83 -- # : 512 00:04:54.649 01:39:40 -- setup/hugepages.sh@84 -- # : 1 00:04:54.649 01:39:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.649 01:39:40 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:54.649 01:39:40 -- setup/hugepages.sh@83 -- # : 0 00:04:54.649 01:39:40 -- setup/hugepages.sh@84 -- # : 0 00:04:54.649 01:39:40 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:54.649 01:39:40 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:54.649 01:39:40 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:54.649 01:39:40 -- setup/hugepages.sh@153 -- # setup output 00:04:54.649 01:39:40 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.649 01:39:40 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:55.585 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:55.585 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:55.585 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:55.585 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:55.585 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:55.585 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:55.585 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:55.585 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:55.585 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:55.585 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:55.585 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:55.585 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:55.585 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:55.585 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:55.585 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:55.585 0000:80:04.1 (8086 0e21): 
Already using the vfio-pci driver 00:04:55.585 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:55.848 01:39:41 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:55.848 01:39:41 -- setup/hugepages.sh@89 -- # local node 00:04:55.848 01:39:41 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:55.848 01:39:41 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:55.848 01:39:41 -- setup/hugepages.sh@92 -- # local surp 00:04:55.848 01:39:41 -- setup/hugepages.sh@93 -- # local resv 00:04:55.848 01:39:41 -- setup/hugepages.sh@94 -- # local anon 00:04:55.848 01:39:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:55.848 01:39:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:55.848 01:39:41 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:55.848 01:39:41 -- setup/common.sh@18 -- # local node= 00:04:55.848 01:39:41 -- setup/common.sh@19 -- # local var val 00:04:55.848 01:39:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.848 01:39:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.848 01:39:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.848 01:39:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.848 01:39:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.848 01:39:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35819840 kB' 'MemAvailable: 39531540 kB' 'Buffers: 2696 kB' 'Cached: 20186964 kB' 'SwapCached: 0 kB' 'Active: 17166584 kB' 'Inactive: 3504240 kB' 'Active(anon): 16580452 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484324 kB' 'Mapped: 229060 kB' 'Shmem: 16099288 kB' 'KReclaimable: 229904 kB' 'Slab: 621084 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391180 kB' 'KernelStack: 12912 kB' 'PageTables: 8632 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 17719376 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197212 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB' 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 
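
The @96 test quoted above, [[ always [madvise] never != *\[\n\e\v\e\r\]* ]], is verify_nr_hugepages checking /sys/kernel/mm/transparent_hugepage/enabled: this host runs THP in madvise mode rather than "never", so anonymous transparent hugepages could inflate the counters, and the get_meminfo scan now in progress samples AnonHugePages (it comes back 0 further down). The guard reduces to a few lines of shell, sketched here with illustrative variable names:

    # THP guard: only when THP is not pinned to "never" does AnonHugePages
    # need to be sampled and discounted from the expected totals.
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
    anon=0
    if [[ $thp != *"[never]"* ]]; then
        anon=$(awk '/^AnonHugePages/ {print $2}' /proc/meminfo)   # kB
    fi
    echo "AnonHugePages=${anon} kB"
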
00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.848 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.848 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.849 
01:39:41 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # continue 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # read -r var val _ 00:04:55.849 01:39:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:55.849 01:39:41 -- setup/common.sh@33 -- # echo 0 00:04:55.849 01:39:41 -- setup/common.sh@33 -- # 
return 0 00:04:55.849 01:39:41 -- setup/hugepages.sh@97 -- # anon=0 00:04:55.849 01:39:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:55.849 01:39:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.849 01:39:41 -- setup/common.sh@18 -- # local node= 00:04:55.849 01:39:41 -- setup/common.sh@19 -- # local var val 00:04:55.849 01:39:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.849 01:39:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.849 01:39:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.849 01:39:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.849 01:39:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.849 01:39:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.849 01:39:41 -- setup/common.sh@31 -- # read -r var val _
00:04:55.849 01:39:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35820232 kB' 'MemAvailable: 39531932 kB' 'Buffers: 2696 kB' 'Cached: 20186968 kB' 'SwapCached: 0 kB' 'Active: 17167304 kB' 'Inactive: 3504240 kB' 'Active(anon): 16581172 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 485144 kB' 'Mapped: 229136 kB' 'Shmem: 16099292 kB' 'KReclaimable: 229904 kB' 'Slab: 621116 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391212 kB' 'KernelStack: 12960 kB' 'PageTables: 8800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 17719888 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197196 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
[setup/common.sh@32: per-key scan of the snapshot above against HugePages_Surp; every non-matching key takes "continue" -- repeated trace lines elided]
00:04:55.850 01:39:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.850 01:39:41 -- setup/common.sh@33 -- # echo 0 00:04:55.850 01:39:41 -- setup/common.sh@33 -- # return 0 00:04:55.850 01:39:41 -- setup/hugepages.sh@99 -- # surp=0
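The trace above shows the whole shape of setup/common.sh's get_meminfo: pick a meminfo file (a per-node sysfs file when a node index is given, /proc/meminfo otherwise), strip the per-node "Node N " prefix, then split each line on ': ' and print the value whose key matches $get. A minimal sketch reconstructed from the traced statements follows; the exact control flow around the node check is an assumption, since only the branches that executed are visible in the trace.

#!/usr/bin/env bash
# Sketch of get_meminfo as reconstructed from this trace.
shopt -s extglob

get_meminfo() {
	local get=$1
	local node=$2
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# A per-node sysfs file overrides /proc/meminfo when a node is given
	# (assumed ordering of the two traced tests at common.sh@23/@25).
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# Per-node files prefix every line with "Node N "; strip it (extglob).
	mem=("${mem[@]#Node +([0-9]) }")
	local line
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

get_meminfo HugePages_Surp     # -> 0 in this run
get_meminfo HugePages_Surp 0   # node 0 -> 0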
00:04:55.851 01:39:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:55.851 01:39:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:55.851 01:39:41 -- setup/common.sh@18 -- # local node= 00:04:55.851 01:39:41 -- setup/common.sh@19 -- # local var val 00:04:55.851 01:39:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.851 01:39:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.851 01:39:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.851 01:39:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.851 01:39:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.851 01:39:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.851 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.851 01:39:41 -- setup/common.sh@31 -- # read -r var val _
00:04:55.851 01:39:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35820792 kB' 'MemAvailable: 39532492 kB' 'Buffers: 2696 kB' 'Cached: 20186980 kB' 'SwapCached: 0 kB' 'Active: 17166432 kB' 'Inactive: 3504240 kB' 'Active(anon): 16580300 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484252 kB' 'Mapped: 229052 kB' 'Shmem: 16099304 kB' 'KReclaimable: 229904 kB' 'Slab: 621148 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391244 kB' 'KernelStack: 12976 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 17719900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197196 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
[setup/common.sh@32: per-key scan of the snapshot above against HugePages_Rsvd; every non-matching key takes "continue" -- repeated trace lines elided]
00:04:55.852 01:39:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:55.852 01:39:41 -- setup/common.sh@33 -- # echo 0 00:04:55.852 01:39:41 -- setup/common.sh@33 -- # return 0 00:04:55.852 01:39:41 -- setup/hugepages.sh@100 -- # resv=0
00:04:55.852 01:39:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:55.852 01:39:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:55.852 01:39:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:55.852 01:39:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:55.852 01:39:41 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.852 01:39:41 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
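What the accounting at setup/hugepages.sh@107-110 verifies: the page count the test configured (nr_hugepages=1024) must equal HugePages_Total as the kernel reports it, after adding surplus and reserved pages (both 0 in this run). A self-contained sketch of the same check; plain awk stands in for get_meminfo here so the snippet runs on its own.

#!/usr/bin/env bash
# Consistency check over /proc/meminfo hugepage counters, mirroring
# hugepages.sh@107-110. Constants match this run (1024 x 2048 kB pages).
nr_hugepages=1024   # what the test configured

surp=$(awk '/^HugePages_Surp:/  {print $2}' /proc/meminfo)   # 0 here
resv=$(awk '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)   # 0 here
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1024 here

echo "nr_hugepages=$nr_hugepages"
echo "resv_hugepages=$resv"
echo "surplus_hugepages=$surp"

# The kernel's total must account for requested + surplus + reserved pages.
(( total == nr_hugepages + surp + resv )) || { echo "hugepage accounting mismatch" >&2; exit 1; }
(( total == nr_hugepages ))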
00:04:55.852 01:39:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:55.852 01:39:41 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:55.852 01:39:41 -- setup/common.sh@18 -- # local node= 00:04:55.852 01:39:41 -- setup/common.sh@19 -- # local var val 00:04:55.852 01:39:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.852 01:39:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.852 01:39:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:55.852 01:39:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:55.852 01:39:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.852 01:39:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.852 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.852 01:39:41 -- setup/common.sh@31 -- # read -r var val _
00:04:55.852 01:39:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35821144 kB' 'MemAvailable: 39532844 kB' 'Buffers: 2696 kB' 'Cached: 20186984 kB' 'SwapCached: 0 kB' 'Active: 17166172 kB' 'Inactive: 3504240 kB' 'Active(anon): 16580040 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483988 kB' 'Mapped: 229052 kB' 'Shmem: 16099308 kB' 'KReclaimable: 229904 kB' 'Slab: 621152 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391248 kB' 'KernelStack: 12976 kB' 'PageTables: 8784 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 17719916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197196 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
[setup/common.sh@32: per-key scan of the snapshot above against HugePages_Total; every non-matching key takes "continue" -- repeated trace lines elided]
00:04:55.854 01:39:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:55.854 01:39:41 -- setup/common.sh@33 -- # echo 1024 00:04:55.854 01:39:41 -- setup/common.sh@33 -- # return 0 00:04:55.854 01:39:41 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:55.854 01:39:41 -- setup/hugepages.sh@112 -- # get_nodes 00:04:55.854 01:39:41 -- setup/hugepages.sh@27 -- # local node 00:04:55.854 01:39:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.854 01:39:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:55.854 01:39:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:55.854 01:39:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:55.854 01:39:41 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:55.854 01:39:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
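After the global check passes, hugepages.sh@112-117 walks the NUMA nodes: get_nodes enumerates /sys/devices/system/node/node<N>, records 512 pages for each of the two nodes in nodes_sys, and the loop that follows re-reads HugePages_Surp from each node's own meminfo file. A sketch of that walk, under one stated assumption: this excerpt never shows where the per-node 512 comes from, so reading each node's HugePages_Total from sysfs is a plausible stand-in, not the confirmed source.

#!/usr/bin/env bash
# Per-NUMA-node hugepage walk, sketched from hugepages.sh@112-117.
shopt -s extglob nullglob
declare -A nodes_sys

for node in /sys/devices/system/node/node+([0-9]); do
	n=${node##*node}
	# Per-node lines look like "Node 0 HugePages_Total:   512" -> field 4.
	nodes_sys[$n]=$(awk '/HugePages_Total:/ {print $4}' "$node/meminfo")
done
echo "no_nodes=${#nodes_sys[@]}"   # 2 in this run

for n in "${!nodes_sys[@]}"; do
	surp=$(awk '/HugePages_Surp:/ {print $4}' "/sys/devices/system/node/node$n/meminfo")
	echo "node$n: HugePages_Total=${nodes_sys[$n]} HugePages_Surp=$surp"   # 512 / 0 here
done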
00:04:55.854 01:39:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.854 01:39:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:55.854 01:39:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:55.854 01:39:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.854 01:39:41 -- setup/common.sh@18 -- # local node=0 00:04:55.854 01:39:41 -- setup/common.sh@19 -- # local var val 00:04:55.854 01:39:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.854 01:39:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.854 01:39:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:55.854 01:39:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:55.854 01:39:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.854 01:39:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.854 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.854 01:39:41 -- setup/common.sh@31 -- # read -r var val _
00:04:55.854 01:39:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 17483656 kB' 'MemUsed: 15346228 kB' 'SwapCached: 0 kB' 'Active: 9957476 kB' 'Inactive: 3323444 kB' 'Active(anon): 9578164 kB' 'Inactive(anon): 0 kB' 'Active(file): 379312 kB' 'Inactive(file): 3323444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12991772 kB' 'Mapped: 126156 kB' 'AnonPages: 292292 kB' 'Shmem: 9289016 kB' 'KernelStack: 7112 kB' 'PageTables: 4428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120184 kB' 'Slab: 316832 kB' 'SReclaimable: 120184 kB' 'SUnreclaim: 196648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@32: per-key scan of the node0 snapshot above against HugePages_Surp; every non-matching key takes "continue" -- repeated trace lines elided]
00:04:55.855 01:39:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:55.855 01:39:41 -- setup/common.sh@33 -- # echo 0 00:04:55.855 01:39:41 -- setup/common.sh@33 -- # return 0 00:04:55.855 01:39:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:55.855 01:39:41 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:55.855 01:39:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
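Per-node sysfs meminfo lines carry a "Node N " prefix ("Node 1 MemTotal: ..."), which is why common.sh@29 rewrites the array before parsing. A standalone demonstration of that strip on hypothetical sample lines shaped like node1's file:

#!/usr/bin/env bash
# The extglob pattern +([0-9]) matches one or more digits, so
# "${mem[@]#Node +([0-9]) }" removes a leading "Node <digits> "
# from every array element; plain /proc/meminfo lines are untouched.
shopt -s extglob
mem=('Node 1 MemTotal: 27711836 kB' 'Node 1 HugePages_Total: 512')
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
# -> MemTotal: 27711836 kB
# -> HugePages_Total: 512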
00:04:55.855 01:39:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:55.855 01:39:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:55.855 01:39:41 -- setup/common.sh@18 -- # local node=1 00:04:55.855 01:39:41 -- setup/common.sh@19 -- # local var val 00:04:55.855 01:39:41 -- setup/common.sh@20 -- # local mem_f mem 00:04:55.855 01:39:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:55.855 01:39:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:55.855 01:39:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:55.855 01:39:41 -- setup/common.sh@28 -- # mapfile -t mem 00:04:55.855 01:39:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:55.855 01:39:41 -- setup/common.sh@31 -- # IFS=': ' 00:04:55.855 01:39:41 -- setup/common.sh@31 -- # read -r var val _
00:04:55.855 01:39:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711836 kB' 'MemFree: 18338484 kB' 'MemUsed: 9373352 kB' 'SwapCached: 0 kB' 'Active: 7208996 kB' 'Inactive: 180796 kB' 'Active(anon): 7002176 kB' 'Inactive(anon): 0 kB' 'Active(file): 206820 kB' 'Inactive(file): 180796 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7197944 kB' 'Mapped: 102896 kB' 'AnonPages: 191980 kB' 'Shmem: 6810328 kB' 'KernelStack: 5864 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109720 kB' 'Slab: 304320 kB' 'SReclaimable: 109720 kB' 'SUnreclaim: 194600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@32: per-key scan of the node1 snapshot above against HugePages_Surp; every non-matching key takes "continue" -- repeated trace lines elided]
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:55.856 01:39:41 -- setup/common.sh@32 -- # continue
00:04:55.856 01:39:41 -- setup/common.sh@31 -- # IFS=': '
00:04:55.856 01:39:41 -- setup/common.sh@31 -- # read -r var val _
00:04:55.856 01:39:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:55.856 01:39:41 -- setup/common.sh@32 -- # continue
00:04:55.856 01:39:41 -- setup/common.sh@31 -- # IFS=': '
00:04:55.856 01:39:41 -- setup/common.sh@31 -- # read -r var val _
00:04:55.856 01:39:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:55.856 01:39:41 -- setup/common.sh@33 -- # echo 0
00:04:55.856 01:39:41 -- setup/common.sh@33 -- # return 0
00:04:55.856 01:39:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:55.856 01:39:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:55.856 01:39:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:55.856 01:39:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:55.856 01:39:41 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:55.856 node0=512 expecting 512
00:04:55.856 01:39:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:55.856 01:39:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:55.856 01:39:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:55.856 01:39:41 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:55.856 node1=512 expecting 512
00:04:55.856 01:39:41 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:55.856 
00:04:55.856 real 0m1.359s
00:04:55.856 user 0m0.568s
00:04:55.856 sys 0m0.751s
00:04:55.856 01:39:41 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:55.856 01:39:41 -- common/autotest_common.sh@10 -- # set +x
00:04:55.856 ************************************
00:04:55.856 END TEST even_2G_alloc
00:04:55.856 ************************************
00:04:56.114 01:39:41 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:56.114 01:39:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:56.114 01:39:41 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:56.114 01:39:41 -- common/autotest_common.sh@10 -- # set +x
00:04:56.114 ************************************
00:04:56.114 START TEST odd_alloc
00:04:56.114 ************************************
00:04:56.114 01:39:41 -- common/autotest_common.sh@1104 -- # odd_alloc
00:04:56.115 01:39:41 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:56.115 01:39:41 -- setup/hugepages.sh@49 -- # local size=2098176
00:04:56.115 01:39:41 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:56.115 01:39:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:56.115 01:39:41 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:56.115 01:39:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:56.115 01:39:41 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:56.115 01:39:41 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:56.115 01:39:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:56.115 01:39:41 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:56.115 01:39:41 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:56.115 01:39:41 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:56.115 01:39:41 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:56.115 01:39:41 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:56.115 01:39:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:56.115 01:39:41 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:56.115 01:39:41 -- setup/hugepages.sh@83 -- # : 513
00:04:56.115 01:39:41 -- setup/hugepages.sh@84 -- # : 1
00:04:56.115 01:39:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:56.115 01:39:41 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:56.115 01:39:41 -- setup/hugepages.sh@83 -- # : 0
00:04:56.115 01:39:41 -- setup/hugepages.sh@84 -- # : 0
00:04:56.115 01:39:41 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:56.115 01:39:41 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:56.115 01:39:41 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:56.115 01:39:41 -- setup/hugepages.sh@160 -- # setup output
00:04:56.115 01:39:41 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:56.115 01:39:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:57.050 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:57.050 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:57.050 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:57.050 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:57.050 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:57.050 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:57.050 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:57.050 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:57.050 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:57.050 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:57.050 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:57.050 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:57.050 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:57.050 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:57.050 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:57.050 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:57.050 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:57.314 01:39:42 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:57.314 01:39:42 -- setup/hugepages.sh@89 -- # local node
00:04:57.314 01:39:42 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:57.314 01:39:42 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:57.314 01:39:42 -- setup/hugepages.sh@92 -- # local surp
00:04:57.314 01:39:42 -- setup/hugepages.sh@93 -- # local resv
00:04:57.314 01:39:42 -- setup/hugepages.sh@94 -- # local anon
00:04:57.314 01:39:42 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:57.314 01:39:42 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:57.314 01:39:42 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:57.314 01:39:42 -- setup/common.sh@18 -- # local node=
00:04:57.314 01:39:42 -- setup/common.sh@19 -- # local var val
00:04:57.314 01:39:42 -- setup/common.sh@20 -- # local mem_f mem
00:04:57.314 01:39:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.314 01:39:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:57.314 01:39:42 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:57.314 01:39:42 -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.314 01:39:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': '
00:04:57.314 01:39:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35840092 kB' 'MemAvailable: 39551792 kB' 'Buffers: 2696 kB' 'Cached: 20187052 kB' 'SwapCached: 0 kB' 'Active: 17168440 kB' 'Inactive: 3504240 kB' 'Active(anon): 16582308 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 486280 kB' 'Mapped: 228536 kB' 'Shmem: 16099376 kB' 'KReclaimable: 229904 kB' 'Slab: 621168 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391264 kB' 'KernelStack: 12944 kB' 'PageTables: 8608 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 17712000 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197216 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB' 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 
01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.314 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.314 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
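
This same key-by-key walk continues below until AnonHugePages matches (anon=0 on this host); verify_nr_hugepages then repeats the lookup for HugePages_Surp and HugePages_Rsvd before checking the pool against the requested size, the `(( 1025 == nr_hugepages + surp + resv ))` step further down. A hedged sketch of that accounting, reusing the get_meminfo sketch above; variable names follow the trace, but the exact suite logic may differ:

    # Ledger check performed once the lookups finish (odd_alloc requests 1025).
    nr_hugepages=1025
    anon=$(get_meminfo AnonHugePages)    # THP-backed anonymous memory, 0 here
    surp=$(get_meminfo HugePages_Surp)   # surplus pages beyond the static pool
    resv=$(get_meminfo HugePages_Rsvd)   # reserved but not yet faulted pages
    total=$(get_meminfo HugePages_Total) # kernel's view of the whole pool
    echo "resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
    # Pass when the kernel-reported total accounts for the requested pages
    # plus any surplus and reserved ones (all zero in this run).
    (( total == nr_hugepages + surp + resv )) && echo "nr_hugepages=$nr_hugepages verified"
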
00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:57.315 01:39:42 -- setup/common.sh@33 -- # echo 0 00:04:57.315 01:39:42 -- setup/common.sh@33 -- # return 0 00:04:57.315 01:39:42 -- setup/hugepages.sh@97 -- # anon=0 00:04:57.315 01:39:42 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:57.315 01:39:42 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:57.315 01:39:42 -- setup/common.sh@18 -- # local node= 00:04:57.315 01:39:42 -- setup/common.sh@19 -- # local var val 00:04:57.315 01:39:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.315 01:39:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.315 01:39:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.315 01:39:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.315 01:39:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.315 01:39:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35839616 kB' 'MemAvailable: 39551316 kB' 'Buffers: 2696 kB' 'Cached: 20187052 kB' 'SwapCached: 0 kB' 'Active: 17169588 kB' 'Inactive: 3504240 kB' 'Active(anon): 16583456 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487496 kB' 'Mapped: 228972 kB' 'Shmem: 16099376 kB' 'KReclaimable: 
229904 kB' 'Slab: 621184 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391280 kB' 'KernelStack: 12944 kB' 'PageTables: 8628 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 17712012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197168 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB' 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.315 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.315 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ VmallocTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 
01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.316 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.316 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:57.317 01:39:42 -- setup/common.sh@33 -- # echo 0 00:04:57.317 01:39:42 -- setup/common.sh@33 -- # return 0 00:04:57.317 01:39:42 -- setup/hugepages.sh@99 -- # surp=0 00:04:57.317 01:39:42 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:57.317 01:39:42 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:57.317 01:39:42 -- setup/common.sh@18 -- # local node= 00:04:57.317 01:39:42 -- setup/common.sh@19 -- # local var val 00:04:57.317 01:39:42 -- setup/common.sh@20 -- # local mem_f mem 00:04:57.317 01:39:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:57.317 01:39:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:57.317 01:39:42 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:57.317 01:39:42 -- setup/common.sh@28 -- # mapfile -t mem 00:04:57.317 01:39:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35840268 kB' 'MemAvailable: 39551968 kB' 'Buffers: 2696 kB' 'Cached: 20187068 kB' 'SwapCached: 0 kB' 'Active: 17163236 kB' 'Inactive: 3504240 kB' 'Active(anon): 16577104 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481048 kB' 'Mapped: 228020 kB' 'Shmem: 16099392 kB' 'KReclaimable: 229904 kB' 'Slab: 621168 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391264 kB' 'KernelStack: 12880 kB' 'PageTables: 8400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 17705908 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB' 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.317 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.317 01:39:42 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 
01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # continue 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': ' 00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _ 00:04:57.318 01:39:42 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:57.318 01:39:42 -- setup/common.sh@33 -- # echo 0 00:04:57.318 01:39:42 -- setup/common.sh@33 -- # return 0 00:04:57.318 01:39:42 -- setup/hugepages.sh@100 -- # resv=0 00:04:57.318 01:39:42 -- setup/hugepages.sh@102 
00:04:57.318 01:39:42 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:57.318 nr_hugepages=1025
00:04:57.318 01:39:42 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:57.318 resv_hugepages=0
00:04:57.318 01:39:42 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:57.318 surplus_hugepages=0
00:04:57.318 01:39:42 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:57.318 anon_hugepages=0
00:04:57.318 01:39:42 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:57.318 01:39:42 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:57.318 01:39:42 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:57.318 01:39:42 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:57.318 01:39:42 -- setup/common.sh@18 -- # local node=
00:04:57.318 01:39:42 -- setup/common.sh@19 -- # local var val
00:04:57.318 01:39:42 -- setup/common.sh@20 -- # local mem_f mem
00:04:57.318 01:39:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.318 01:39:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:57.318 01:39:42 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:57.318 01:39:42 -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.318 01:39:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.318 01:39:42 -- setup/common.sh@31 -- # IFS=': '
00:04:57.318 01:39:42 -- setup/common.sh@31 -- # read -r var val _
00:04:57.318 01:39:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35841020 kB' 'MemAvailable: 39552720 kB' 'Buffers: 2696 kB' 'Cached: 20187080 kB' 'SwapCached: 0 kB' 'Active: 17163256 kB' 'Inactive: 3504240 kB' 'Active(anon): 16577124 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481136 kB' 'Mapped: 227996 kB' 'Shmem: 16099404 kB' 'KReclaimable: 229904 kB' 'Slab: 621168 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391264 kB' 'KernelStack: 12912 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 17705920 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
[... xtrace condensed: setup/common.sh@32 scans MemTotal through HugePages_Free with the same IFS/read/[[ ]]/continue pattern until HugePages_Total matches ...]
00:04:57.320 01:39:42 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:57.320 01:39:42 -- setup/common.sh@33 -- # echo 1025
00:04:57.320 01:39:42 -- setup/common.sh@33 -- # return 0
00:04:57.320 01:39:42 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:57.320 01:39:42 -- setup/hugepages.sh@112 -- # get_nodes
00:04:57.320 01:39:42 -- setup/hugepages.sh@27 -- # local node
00:04:57.320 01:39:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:57.320 01:39:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:57.320 01:39:42 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:57.320 01:39:42 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:57.320 01:39:42 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:57.320 01:39:42 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
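When get_meminfo is called with a node argument (as in the HugePages_Surp lookups that follow), the same loop runs against /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node <n> " prefix that common.sh@29 strips. A minimal sketch of that per-node variant, assuming a NUMA kernel that exposes the per-node meminfo file (node_field is an illustrative name, not the script's own):

    node_field() {
        # Per-node lines read "Node 0 HugePages_Surp:      0":
        # consume the "Node" and "<n>" tokens before the key and value.
        local node=$1 get=$2 _node _n var val _
        while IFS=': ' read -r _node _n var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "/sys/devices/system/node/node${node}/meminfo"
        return 1
    }
    # node_field 0 HugePages_Surp  -> 0 on this runner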
00:04:57.320 01:39:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:57.320 01:39:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:57.320 01:39:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:57.320 01:39:42 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:57.320 01:39:42 -- setup/common.sh@18 -- # local node=0
00:04:57.320 01:39:42 -- setup/common.sh@19 -- # local var val
00:04:57.320 01:39:42 -- setup/common.sh@20 -- # local mem_f mem
00:04:57.320 01:39:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.320 01:39:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:57.320 01:39:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:57.320 01:39:42 -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.320 01:39:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.320 01:39:42 -- setup/common.sh@31 -- # IFS=': '
00:04:57.320 01:39:42 -- setup/common.sh@31 -- # read -r var val _
00:04:57.320 01:39:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 17506952 kB' 'MemUsed: 15322932 kB' 'SwapCached: 0 kB' 'Active: 9956056 kB' 'Inactive: 3323444 kB' 'Active(anon): 9576744 kB' 'Inactive(anon): 0 kB' 'Active(file): 379312 kB' 'Inactive(file): 3323444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12991792 kB' 'Mapped: 125100 kB' 'AnonPages: 290968 kB' 'Shmem: 9289036 kB' 'KernelStack: 7128 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120184 kB' 'Slab: 316876 kB' 'SReclaimable: 120184 kB' 'SUnreclaim: 196692 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... xtrace condensed: setup/common.sh@32 scans MemTotal through HugePages_Free of node0 until HugePages_Surp matches ...]
00:04:57.321 01:39:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:57.321 01:39:42 -- setup/common.sh@33 -- # echo 0
00:04:57.321 01:39:42 -- setup/common.sh@33 -- # return 0
00:04:57.321 01:39:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:57.321 01:39:42 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:57.321 01:39:42 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:57.321 01:39:42 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:57.321 01:39:42 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:57.321 01:39:42 -- setup/common.sh@18 -- # local node=1
00:04:57.321 01:39:42 -- setup/common.sh@19 -- # local var val
00:04:57.321 01:39:42 -- setup/common.sh@20 -- # local mem_f mem
00:04:57.321 01:39:42 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:57.321 01:39:42 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:57.321 01:39:42 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:57.321 01:39:42 -- setup/common.sh@28 -- # mapfile -t mem
00:04:57.321 01:39:42 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:57.321 01:39:42 -- setup/common.sh@31 -- # IFS=': '
00:04:57.321 01:39:42 -- setup/common.sh@31 -- # read -r var val _
00:04:57.321 01:39:42 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711836 kB' 'MemFree: 18334292 kB' 'MemUsed: 9377544 kB' 'SwapCached: 0 kB' 'Active: 7207704 kB' 'Inactive: 180796 kB' 'Active(anon): 7000884 kB' 'Inactive(anon): 0 kB' 'Active(file): 206820 kB' 'Inactive(file): 180796 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7198012 kB' 'Mapped: 102896 kB' 'AnonPages: 190684 kB' 'Shmem: 6810396 kB' 'KernelStack: 5784 kB' 'PageTables: 3964 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109720 kB' 'Slab: 304292 kB' 'SReclaimable: 109720 kB' 'SUnreclaim: 194572 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
[... xtrace condensed: setup/common.sh@32 scans MemTotal through HugePages_Free of node1 until HugePages_Surp matches ...]
00:04:57.322 01:39:42 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:57.322 01:39:42 -- setup/common.sh@33 -- # echo 0
00:04:57.322 01:39:42 -- setup/common.sh@33 -- # return 0
00:04:57.322 01:39:42 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:57.322 01:39:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:57.322 01:39:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:57.322 01:39:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:57.322 01:39:42 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:57.322 node0=512 expecting 513
00:04:57.322 01:39:42 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:57.322 01:39:42 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:57.322 01:39:42 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:57.322 01:39:42 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:57.322 node1=513 expecting 512
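The crossed "node0=512 expecting 513" / "node1=513 expecting 512" pair is not a failure. hugepages.sh@127 records both distributions as indices of indexed arrays, so the final @130 comparison below tests the multiset of per-node counts rather than which node holds the odd page; with 1025 pages on 2 nodes the kernel may park the extra page on either node. A standalone sketch of that order-insensitive check (not the script itself):

    declare -a sorted_t sorted_s
    sorted_t[512]=1; sorted_t[513]=1   # observed: node0=512, node1=513
    sorted_s[513]=1; sorted_s[512]=1   # requested: node0=513, node1=512
    # Indexed-array keys enumerate in ascending order, so both expand to "512 513".
    [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo 'split OK: 512 513 == 512 513'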
00:04:57.322 01:39:42 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:57.322
00:04:57.322 real 0m1.451s
00:04:57.322 user 0m0.588s
00:04:57.322 sys 0m0.821s
00:04:57.322 01:39:42 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:57.322 01:39:42 -- common/autotest_common.sh@10 -- # set +x
00:04:57.322 ************************************
00:04:57.322 END TEST odd_alloc
00:04:57.322 ************************************
00:04:57.581 01:39:42 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:57.581 01:39:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:57.581 01:39:42 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:57.581 01:39:42 -- common/autotest_common.sh@10 -- # set +x
00:04:57.581 ************************************
00:04:57.581 START TEST custom_alloc
00:04:57.581 ************************************
00:04:57.581 01:39:42 -- common/autotest_common.sh@1104 -- # custom_alloc
00:04:57.581 01:39:42 -- setup/hugepages.sh@167 -- # local IFS=,
00:04:57.581 01:39:42 -- setup/hugepages.sh@169 -- # local node
00:04:57.581 01:39:42 -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:57.581 01:39:42 -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:57.581 01:39:42 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:57.581 01:39:42 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:57.581 01:39:42 -- setup/hugepages.sh@49 -- # local size=1048576
00:04:57.581 01:39:42 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:57.581 01:39:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:57.581 01:39:42 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:57.581 01:39:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:57.581 01:39:42 -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:57.581 01:39:42 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:57.581 01:39:42 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:57.581 01:39:42 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:57.581 01:39:42 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:57.581 01:39:42 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:57.581 01:39:42 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:57.581 01:39:42 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:57.581 01:39:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:57.581 01:39:42 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:57.581 01:39:42 -- setup/hugepages.sh@83 -- # : 256
00:04:57.581 01:39:42 -- setup/hugepages.sh@84 -- # : 1
00:04:57.581 01:39:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:57.581 01:39:42 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:57.581 01:39:42 -- setup/hugepages.sh@83 -- # : 0
00:04:57.581 01:39:42 -- setup/hugepages.sh@84 -- # : 0
00:04:57.581 01:39:42 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:57.581 01:39:42 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:57.581 01:39:42 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:57.581 01:39:42 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:57.581 01:39:42 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:57.581 01:39:42 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:57.581 01:39:42 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:57.581 01:39:42 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:57.581 01:39:42 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
[... xtrace condensed: setup/hugepages.sh@62-@78 run get_test_nr_hugepages_per_node with _nr_hugepages=1024; nodes_hp already holds index 0, so nodes_test[0]=512; return 0 ...]
00:04:57.581 01:39:42 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:57.581 01:39:42 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:57.581 01:39:42 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:57.581 01:39:42 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:57.582 01:39:42 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:57.582 01:39:42 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:57.582 01:39:42 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:57.582 01:39:42 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
[... xtrace condensed: setup/hugepages.sh@62-@78 rebuild nodes_test from nodes_hp (nodes_test[0]=512, nodes_test[1]=1024); return 0 ...]
00:04:57.582 01:39:42 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:57.582 01:39:42 -- setup/hugepages.sh@187 -- # setup output
00:04:57.582 01:39:42 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:57.582 01:39:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:58.515 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:58.515 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:58.515 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:58.515 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:58.515 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:58.515 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:58.776 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:58.776 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:58.776 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:58.776 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:58.776 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:58.776 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:58.776 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:58.776 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:58.776 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:58.776 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:58.776 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:58.776 01:39:44 -- setup/hugepages.sh@188 -- # nr_hugepages=1536
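The HUGENODE spec handed to setup.sh above asks for 512 pages on node 0 and 1024 on node 1, 1536 in total, which is exactly the nr_hugepages the test now expects. Per-node hugepage counts of this kind are ultimately driven through sysfs; the following is only an illustration of the equivalent manual writes (assumes 2048 kB default hugepages and root privileges, and is not the literal code path setup.sh takes):

    declare -A nodes_hp=([0]=512 [1]=1024)
    for node in "${!nodes_hp[@]}"; do
        # Request nodes_hp[node] 2 MB hugepages on this specific NUMA node.
        echo "${nodes_hp[$node]}" \
            > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    done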
00:04:58.776 01:39:44 -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:58.776 01:39:44 -- setup/hugepages.sh@89 -- # local node
00:04:58.776 01:39:44 -- setup/hugepages.sh@90 -- # local sorted_t
00:04:58.776 01:39:44 -- setup/hugepages.sh@91 -- # local sorted_s
00:04:58.776 01:39:44 -- setup/hugepages.sh@92 -- # local surp
00:04:58.776 01:39:44 -- setup/hugepages.sh@93 -- # local resv
00:04:58.776 01:39:44 -- setup/hugepages.sh@94 -- # local anon
00:04:58.776 01:39:44 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:58.776 01:39:44 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:58.776 01:39:44 -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:58.776 01:39:44 -- setup/common.sh@18 -- # local node=
00:04:58.776 01:39:44 -- setup/common.sh@19 -- # local var val
00:04:58.776 01:39:44 -- setup/common.sh@20 -- # local mem_f mem
00:04:58.776 01:39:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.776 01:39:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:58.776 01:39:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:58.776 01:39:44 -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.776 01:39:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.776 01:39:44 -- setup/common.sh@31 -- # IFS=': '
00:04:58.776 01:39:44 -- setup/common.sh@31 -- # read -r var val _
00:04:58.776 01:39:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 34777720 kB' 'MemAvailable: 38489420 kB' 'Buffers: 2696 kB' 'Cached: 20187152 kB' 'SwapCached: 0 kB' 'Active: 17163980 kB' 'Inactive: 3504240 kB' 'Active(anon): 16577848 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481528 kB' 'Mapped: 228008 kB' 'Shmem: 16099476 kB' 'KReclaimable: 229904 kB' 'Slab: 620972 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391068 kB' 'KernelStack: 12944 kB' 'PageTables: 8520 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 17706280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197276 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
[... xtrace condensed: setup/common.sh@32 scans MemTotal through HardwareCorrupted and continues past every field that is not AnonHugePages ...]
00:04:58.777 01:39:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:58.777 01:39:44 -- setup/common.sh@33 -- # echo 0
00:04:58.777 01:39:44 -- setup/common.sh@33 -- # return 0
00:04:58.777 01:39:44 -- setup/hugepages.sh@97 -- # anon=0
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:58.777 01:39:44 -- setup/common.sh@33 -- # echo 0 00:04:58.777 01:39:44 -- setup/common.sh@33 -- # return 0 00:04:58.777 01:39:44 -- setup/hugepages.sh@97 -- # anon=0 00:04:58.777 01:39:44 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:58.777 01:39:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:58.777 01:39:44 -- setup/common.sh@18 -- # local node= 00:04:58.777 01:39:44 -- setup/common.sh@19 -- # local var val 00:04:58.777 01:39:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.777 01:39:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.777 01:39:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:58.777 01:39:44 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:58.777 01:39:44 -- setup/common.sh@28 -- # mapfile -t mem 00:04:58.777 01:39:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.777 01:39:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 34778156 kB' 'MemAvailable: 38489856 kB' 'Buffers: 2696 kB' 'Cached: 20187164 kB' 'SwapCached: 0 kB' 'Active: 17164224 kB' 'Inactive: 3504240 kB' 'Active(anon): 16578092 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481808 kB' 'Mapped: 228008 kB' 'Shmem: 16099488 kB' 'KReclaimable: 229904 kB' 'Slab: 620956 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391052 kB' 'KernelStack: 12912 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 17706292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197244 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB' 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 
00:04:58.777 01:39:44 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.777 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.777 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # continue 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:58.778 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:58.778 01:39:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:58.778 01:39:44 -- setup/common.sh@33 -- # echo 0 00:04:58.778 01:39:44 -- setup/common.sh@33 -- # return 0 00:04:58.779 01:39:44 -- setup/hugepages.sh@99 -- # surp=0 00:04:58.779 01:39:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:58.779 01:39:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:58.779 01:39:44 -- setup/common.sh@18 -- # local node= 00:04:58.779 01:39:44 -- setup/common.sh@19 -- # local var val 00:04:58.779 01:39:44 -- setup/common.sh@20 -- # local mem_f mem 00:04:58.779 01:39:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:58.779 
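
The records above are the setup/common.sh get_meminfo helper at work: it snapshots the meminfo file with mapfile, then walks the snapshot with IFS=': ' read -r var val _, skipping every key with continue until the requested one matches, echoing the matching value, and returning 0. A minimal standalone sketch of the same lookup technique (meminfo_lookup and its exact shape are illustrative, not the script's literal code):

    meminfo_lookup() {
        local key=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$key" ]] || continue   # every non-matching key is skipped, as traced above
            echo "$val"                        # kB figure for sized fields, a bare count for HugePages_*
            return 0
        done < /proc/meminfo
        return 1                               # key not present
    }
    meminfo_lookup HugePages_Surp              # prints 0 on the host traced above
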
00:04:58.779 01:39:44 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:58.779 01:39:44 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:58.779 01:39:44 -- setup/common.sh@18 -- # local node=
00:04:58.779 01:39:44 -- setup/common.sh@19 -- # local var val
00:04:58.779 01:39:44 -- setup/common.sh@20 -- # local mem_f mem
00:04:58.779 01:39:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:58.779 01:39:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:58.779 01:39:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:58.779 01:39:44 -- setup/common.sh@28 -- # mapfile -t mem
00:04:58.779 01:39:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:58.779 01:39:44 -- setup/common.sh@31 -- # IFS=': '
00:04:58.779 01:39:44 -- setup/common.sh@31 -- # read -r var val _
00:04:58.779 01:39:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 34778156 kB' 'MemAvailable: 38489856 kB' 'Buffers: 2696 kB' 'Cached: 20187164 kB' 'SwapCached: 0 kB' 'Active: 17164224 kB' 'Inactive: 3504240 kB' 'Active(anon): 16578092 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481816 kB' 'Mapped: 228004 kB' 'Shmem: 16099488 kB' 'KReclaimable: 229904 kB' 'Slab: 620956 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391052 kB' 'KernelStack: 12928 kB' 'PageTables: 8456 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 17706308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197244 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
00:04:58.779-00:04:59.041 [xtrace: every key from MemTotal through HugePages_Free is tested against HugePages_Rsvd and skipped with continue]
00:04:59.041 01:39:44 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:59.041 01:39:44 -- setup/common.sh@33 -- # echo 0
00:04:59.041 01:39:44 -- setup/common.sh@33 -- # return 0
00:04:59.041 01:39:44 -- setup/hugepages.sh@100 -- # resv=0
00:04:59.041 01:39:44 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:59.041 nr_hugepages=1536
00:04:59.041 01:39:44 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:59.041 resv_hugepages=0
00:04:59.041 01:39:44 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:59.041 surplus_hugepages=0
00:04:59.041 01:39:44 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:59.041 anon_hugepages=0
00:04:59.041 01:39:44 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:59.041 01:39:44 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:59.041 01:39:44 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:59.041 01:39:44 -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:59.041 01:39:44 -- setup/common.sh@18 -- # local node=
00:04:59.041 01:39:44 -- setup/common.sh@19 -- # local var val
00:04:59.041 01:39:44 -- setup/common.sh@20 -- # local mem_f mem
00:04:59.041 01:39:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.041 01:39:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:59.041 01:39:44 -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:59.041 01:39:44 -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.041 01:39:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.041 01:39:44 -- setup/common.sh@31 -- # IFS=': '
00:04:59.041 01:39:44 -- setup/common.sh@31 -- # read -r var val _
00:04:59.041 01:39:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 34785180 kB' 'MemAvailable: 38496880 kB' 'Buffers: 2696 kB' 'Cached: 20187180 kB' 'SwapCached: 0 kB' 'Active: 17163544 kB' 'Inactive: 3504240 kB' 'Active(anon): 16577412 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481120 kB' 'Mapped: 228004 kB' 'Shmem: 16099504 kB' 'KReclaimable: 229904 kB' 'Slab: 620988 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391084 kB' 'KernelStack: 12880 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 17706320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197244 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
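
At setup/hugepages.sh@107 above, the script asserts that the requested total equals the kernel's count plus surplus and reserved pages; with the values just echoed, 1536 == 1536 + 0 + 0 holds. The snapshot is internally consistent as well: HugePages_Total x Hugepagesize = 1536 x 2048 kB = 3145728 kB, matching the reported 'Hugetlb: 3145728 kB'. A small sketch of the same consistency checks (variable names mirror the trace but are illustrative):

    nr_hugepages=1536 surp=0 resv=0            # values reported in the trace above
    hugepagesize_kb=2048
    (( 1536 == nr_hugepages + surp + resv )) && echo 'hugepage accounting consistent'
    echo "expected Hugetlb: $((nr_hugepages * hugepagesize_kb)) kB"   # prints 3145728 kB
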
00:04:59.041-00:04:59.042 [xtrace: every key from MemTotal through Unaccepted is tested against HugePages_Total and skipped with continue]
00:04:59.042 01:39:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:59.042 01:39:44 -- setup/common.sh@33 -- # echo 1536
00:04:59.042 01:39:44 -- setup/common.sh@33 -- # return 0
00:04:59.042 01:39:44 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:59.042 01:39:44 -- setup/hugepages.sh@112 -- # get_nodes
00:04:59.042 01:39:44 -- setup/hugepages.sh@27 -- # local node
00:04:59.042 01:39:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:59.042 01:39:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:59.042 01:39:44 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:59.042 01:39:44 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:59.042 01:39:44 -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:59.042 01:39:44 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:59.042 01:39:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:59.042 01:39:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:59.042 01:39:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:59.042 01:39:44 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:59.042 01:39:44 -- setup/common.sh@18 -- # local node=0
00:04:59.042 01:39:44 -- setup/common.sh@19 -- # local var val
00:04:59.042 01:39:44 -- setup/common.sh@20 -- # local mem_f mem
00:04:59.042 01:39:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.042 01:39:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:59.042 01:39:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:59.042 01:39:44 -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.042 01:39:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.042 01:39:44 -- setup/common.sh@31 -- # IFS=': '
00:04:59.043 01:39:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 17500104 kB' 'MemUsed: 15329780 kB' 'SwapCached: 0 kB' 'Active: 9955856 kB' 'Inactive: 3323444 kB' 'Active(anon): 9576544 kB' 'Inactive(anon): 0 kB' 'Active(file): 379312 kB' 'Inactive(file): 3323444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12991796 kB' 'Mapped: 125108 kB' 'AnonPages: 290672 kB' 'Shmem: 9289040 kB' 'KernelStack: 7112 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120184 kB' 'Slab: 316672 kB' 'SReclaimable: 120184 kB' 'SUnreclaim: 196488 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _
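
get_nodes has just found two NUMA nodes requesting 512 and 1024 hugepages (512 + 1024 = 1536, matching HugePages_Total), so the per-node lookups switch mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo; each line there carries a 'Node 0 ' prefix, which common.sh@29 strips with the extglob pattern Node +([0-9]). A standalone sketch of that per-node read (the sysfs paths are the kernel's; the loop shape and variable names are illustrative):

    #!/usr/bin/env bash
    shopt -s extglob                           # required for the +([0-9]) patterns below
    total=0
    for f in /sys/devices/system/node/node+([0-9])/meminfo; do
        mapfile -t mem < "$f"                  # snapshot one node's meminfo
        mem=("${mem[@]#Node +([0-9]) }")       # 'Node 0 HugePages_Total: 512' -> 'HugePages_Total: 512'
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == HugePages_Total ]] && (( total += val ))
        done
    done
    echo "HugePages_Total summed over nodes: $total"   # 512 + 1024 = 1536 on the host traced above
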
setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 
-- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.043 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.043 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.044 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.044 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.044 01:39:44 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.044 01:39:44 -- setup/common.sh@32 -- # continue 00:04:59.044 01:39:44 -- setup/common.sh@31 -- # IFS=': ' 00:04:59.044 01:39:44 -- setup/common.sh@31 -- # read -r var val _ 00:04:59.044 01:39:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:59.044 01:39:44 -- setup/common.sh@33 -- # echo 0 00:04:59.044 01:39:44 -- setup/common.sh@33 -- # return 0 00:04:59.044 01:39:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:59.044 01:39:44 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:59.044 01:39:44 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:59.044 01:39:44 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:59.044 01:39:44 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:59.044 01:39:44 -- setup/common.sh@18 -- # local node=1 
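The scan compressed above is the harness's get_meminfo helper walking a per-node meminfo file key by key: pick the node-specific file when a node id is given, strip the "Node <id> " prefix, then test every field name until the requested key matches. A minimal runnable sketch of that logic, reconstructed from the traced commands (the _sketch name and the return-1 fallback are our assumptions, not the verbatim setup/common.sh):

  # Shell sketch of the get_meminfo flow traced above (reconstructed from
  # the xtrace; not the verbatim SPDK setup/common.sh).
  shopt -s extglob
  get_meminfo_sketch() {
      local get=$1 node=$2 var val _ mem
      local mem_f=/proc/meminfo
      # With a node id, read the per-node view if the kernel exposes one.
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")       # per-node lines begin "Node 0 ..."
      while IFS=': ' read -r var val _; do   # ':' and ' ' both split fields
          if [[ $var == "$get" ]]; then
              echo "$val"                    # e.g. 0 for HugePages_Surp above
              return 0
          fi
      done < <(printf '%s\n' "${mem[@]}")
      return 1                               # assumed fallback for a missing key
  }
  # Matches the traced call: get_meminfo_sketch HugePages_Surp 0

Each iteration in the trace (IFS=': ', read -r var val _, a pattern test, continue) is one pass of that while loop, which is why a single lookup spans dozens of xtrace entries.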
00:04:59.044 01:39:44 -- setup/common.sh@19 -- # local var val
00:04:59.044 01:39:44 -- setup/common.sh@20 -- # local mem_f mem
00:04:59.044 01:39:44 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:59.044 01:39:44 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:59.044 01:39:44 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:59.044 01:39:44 -- setup/common.sh@28 -- # mapfile -t mem
00:04:59.044 01:39:44 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:59.044 01:39:44 -- setup/common.sh@31 -- # IFS=': '
00:04:59.044 01:39:44 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711836 kB' 'MemFree: 17286768 kB' 'MemUsed: 10425068 kB' 'SwapCached: 0 kB' 'Active: 7207936 kB' 'Inactive: 180796 kB' 'Active(anon): 7001116 kB' 'Inactive(anon): 0 kB' 'Active(file): 206820 kB' 'Inactive(file): 180796 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 7198108 kB' 'Mapped: 102896 kB' 'AnonPages: 190740 kB' 'Shmem: 6810492 kB' 'KernelStack: 5832 kB' 'PageTables: 4072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 109720 kB' 'Slab: 304316 kB' 'SReclaimable: 109720 kB' 'SUnreclaim: 194596 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace elided: the same per-key scan over the node1 fields above, "continue" on every key until HugePages_Surp matched]
00:04:59.045 01:39:44 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:59.045 01:39:44 -- setup/common.sh@33 -- # echo 0
00:04:59.045 01:39:44 -- setup/common.sh@33 -- # return 0
00:04:59.045 01:39:44 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:59.045 01:39:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:59.045 01:39:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:59.045 01:39:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:59.045 01:39:44 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:59.045 node0=512 expecting 512
00:04:59.045 01:39:44 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:59.045 01:39:44 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:59.045 01:39:44 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:59.045 01:39:44 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:59.045 node1=1024 expecting 1024
00:04:59.045 01:39:44 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:59.045
00:04:59.045 real 0m1.516s
00:04:59.045 user 0m0.660s
00:04:59.045 sys 0m0.818s
00:04:59.045 01:39:44 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:59.045 01:39:44 -- common/autotest_common.sh@10 -- # set +x
00:04:59.045 ************************************
00:04:59.045 END TEST custom_alloc
00:04:59.045 ************************************
00:04:59.045 01:39:44 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:59.045 01:39:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:04:59.045 01:39:44 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:04:59.045 01:39:44 -- common/autotest_common.sh@10 -- # set +x
00:04:59.045 ************************************
00:04:59.045 START TEST no_shrink_alloc
00:04:59.045 ************************************
00:04:59.045 01:39:44 -- common/autotest_common.sh@1104 -- # no_shrink_alloc
00:04:59.045 01:39:44 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:59.045 01:39:44 -- setup/hugepages.sh@49 -- # local size=2097152
00:04:59.045 01:39:44 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:59.045 01:39:44 -- setup/hugepages.sh@51 -- # shift
00:04:59.045 01:39:44 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:59.045 01:39:44 -- setup/hugepages.sh@52 -- # local node_ids
00:04:59.045 01:39:44 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:59.045 01:39:44 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:59.045 01:39:44 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:59.045 01:39:44 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:59.045 01:39:44 -- setup/hugepages.sh@62 -- # local user_nodes
00:04:59.045 01:39:44 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:59.045 01:39:44 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:59.045 01:39:44 -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:59.045 01:39:44 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:59.045 01:39:44 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:59.045 01:39:44 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:59.045 01:39:44 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:59.045 01:39:44 -- setup/hugepages.sh@73 -- # return 0
00:04:59.045 01:39:44 -- setup/hugepages.sh@198 -- # setup output
00:04:59.045 01:39:44 -- setup/common.sh@9 -- # [[ output == output ]]
00:04:59.045 01:39:44 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:59.980 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:59.980 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:59.980 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:59.980 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:59.980 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:00.240 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:00.240 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:00.240 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:00.240 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:00.240 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:00.240 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:00.240 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:00.240 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:00.240 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:00.240 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:00.240 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:00.240 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:00.240 01:39:45 -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:05:00.240 01:39:45 -- setup/hugepages.sh@89 -- # local node
00:05:00.240 01:39:45 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:00.240 01:39:45 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:00.240 01:39:45 -- setup/hugepages.sh@92 -- # local surp
00:05:00.240 01:39:45 -- setup/hugepages.sh@93 -- # local resv
00:05:00.240 01:39:45 -- setup/hugepages.sh@94 -- # local anon
00:05:00.240 01:39:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:00.240 01:39:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:00.240 01:39:45 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:00.240 01:39:45 -- setup/common.sh@18 -- # local node=
00:05:00.240 01:39:45 -- setup/common.sh@19 -- # local var val
00:05:00.240 01:39:45 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.240 01:39:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.240 01:39:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.240 01:39:45 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.240 01:39:45 -- setup/common.sh@28 -- # mapfile -t mem
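The get_test_nr_hugepages 2097152 0 call traced before the setup.sh run sizes the pool: with the default 2048 kB (2 MiB) huge page, a 2097152 kB request becomes nr_hugepages=1024, and the explicit node list pins the whole count on node 0. A hedged sketch of that arithmetic and per-node split (the variable handling is assumed from the @49-@73 trace, not copied from hugepages.sh):

  # Assumed condensation of the traced get_test_nr_hugepages flow.
  declare -a nodes_test=()
  nr_hugepages=0
  get_test_nr_hugepages_sketch() {
      local size=$1; shift               # requested pool size in kB
      local -a user_nodes=("$@")         # optional NUMA node ids, e.g. 0
      local default_hugepages=2048       # Hugepagesize in kB (2 MiB pages)
      (( size >= default_hugepages )) || return 1
      nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
      local n
      for n in "${user_nodes[@]}"; do
          # With explicit nodes the whole count lands on each listed node,
          # mirroring the traced nodes_test[_no_nodes]=1024 assignment.
          nodes_test[n]=$nr_hugepages
      done
  }
  get_test_nr_hugepages_sketch 2097152 0
  echo "nr_hugepages=$nr_hugepages node0=${nodes_test[0]}"   # -> 1024 / 1024

That also explains the snapshots that follow: Hugetlb: 2097152 kB alongside Hugepagesize: 2048 kB is exactly the 1024-page pool.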
00:05:00.240 01:39:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.240 01:39:45 -- setup/common.sh@31 -- # IFS=': '
00:05:00.240 01:39:45 -- setup/common.sh@31 -- # read -r var val _
00:05:00.240 01:39:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35822612 kB' 'MemAvailable: 39534312 kB' 'Buffers: 2696 kB' 'Cached: 20187244 kB' 'SwapCached: 0 kB' 'Active: 17163604 kB' 'Inactive: 3504240 kB' 'Active(anon): 16577472 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481264 kB' 'Mapped: 228008 kB' 'Shmem: 16099568 kB' 'KReclaimable: 229904 kB' 'Slab: 620880 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 390976 kB' 'KernelStack: 12928 kB' 'PageTables: 8480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 17706672 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197196 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
[xtrace elided: per-key scan of the system meminfo above, "continue" on every key until AnonHugePages matched]
00:05:00.241 01:39:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:00.241 01:39:45 -- setup/common.sh@33 -- # echo 0
00:05:00.241 01:39:45 -- setup/common.sh@33 -- # return 0
00:05:00.241 01:39:45 -- setup/hugepages.sh@97 -- # anon=0
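anon=0 above is the transparent-hugepage guard inside verify_nr_hugepages, and the HugePages_Surp and HugePages_Rsvd reads that follow supply the remaining terms of the pool identity the harness asserted earlier at hugepages.sh@110, (( total == nr_hugepages + surp + resv )). A hedged sketch of the whole verification step, reusing the hypothetical get_meminfo_sketch and nr_hugepages from the sketches above (the sysfs read and the function name are our assumptions; the identity is the script's own):

  # Sketch of the traced verify sequence: THP gate, then pool accounting.
  verify_nr_hugepages_sketch() {
      local anon=0 surp resv total
      # Gate matching the traced "[[ always [madvise] never != *\[never\]* ]]":
      # only count anonymous huge pages when THP is not fully disabled.
      if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *'[never]'* ]]; then
          anon=$(get_meminfo_sketch AnonHugePages)
      fi
      (( anon == 0 )) || { echo "THP interference: AnonHugePages=$anon kB" >&2; return 1; }
      surp=$(get_meminfo_sketch HugePages_Surp)    # surplus beyond the static pool
      resv=$(get_meminfo_sketch HugePages_Rsvd)    # reserved, not yet faulted in
      total=$(get_meminfo_sketch HugePages_Total)
      # The identity the harness checks (hugepages.sh@110 in the trace):
      (( total == nr_hugepages + surp + resv )) || {
          echo "pool mismatch: total=$total nr=$nr_hugepages surp=$surp resv=$resv" >&2
          return 1
      }
      echo "hugepage pool consistent: $total pages"
  }

In the log both queries return 0, so the 1024-page pool checks out against the reported HugePages_Total: 1024.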
00:05:00.241 01:39:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:00.241 01:39:45 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:00.241 01:39:45 -- setup/common.sh@18 -- # local node=
00:05:00.241 01:39:45 -- setup/common.sh@19 -- # local var val
00:05:00.241 01:39:45 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.241 01:39:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.241 01:39:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.241 01:39:45 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.241 01:39:45 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.241 01:39:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.241 01:39:45 -- setup/common.sh@31 -- # IFS=': '
00:05:00.241 01:39:45 -- setup/common.sh@31 -- # read -r var val _
00:05:00.241 01:39:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35841120 kB' 'MemAvailable: 39552820 kB' 'Buffers: 2696 kB' 'Cached: 20187244 kB' 'SwapCached: 0 kB' 'Active: 17164284 kB' 'Inactive: 3504240 kB' 'Active(anon): 16578152 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481988 kB' 'Mapped: 228008 kB' 'Shmem: 16099568 kB' 'KReclaimable: 229904 kB' 'Slab: 620864 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 390960 kB' 'KernelStack: 12928 kB' 'PageTables: 8468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 17706684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
[xtrace elided: per-key scan of the snapshot above, "continue" on every key until HugePages_Surp matched]
00:05:00.243 01:39:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:00.243 01:39:45 -- setup/common.sh@33 -- # echo 0
00:05:00.243 01:39:45 -- setup/common.sh@33 -- # return 0
00:05:00.243 01:39:45 -- setup/hugepages.sh@99 -- # surp=0
00:05:00.243 01:39:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:00.243 01:39:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:00.243 01:39:45 -- setup/common.sh@18 -- # local node=
00:05:00.243 01:39:45 -- setup/common.sh@19 -- # local var val
00:05:00.243 01:39:45 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.243 01:39:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.243 01:39:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.243 01:39:45 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.243 01:39:45 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.243 01:39:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.243 01:39:45 -- setup/common.sh@31 -- # IFS=': '
00:05:00.243 01:39:45 -- setup/common.sh@31 -- # read -r var val _
00:05:00.243 01:39:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35840888 kB' 'MemAvailable: 39552588 kB' 'Buffers: 2696 kB' 'Cached: 20187256 kB' 'SwapCached: 0 kB' 'Active: 17162996 kB' 'Inactive: 3504240 kB' 'Active(anon): 16576864 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480620 kB' 'Mapped: 228008 kB' 'Shmem: 16099580 kB' 'KReclaimable: 229904 kB' 'Slab: 620932 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391028 kB' 'KernelStack: 12896 kB' 'PageTables: 8372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 17706700 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
[xtrace elided: per-key scan of the snapshot above, still short of the HugePages_Rsvd key]
00:05:00.503 01:39:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:00.504 01:39:45 -- setup/common.sh@32 -- # continue
00:05:00.504 01:39:45 -- setup/common.sh@31 -- # IFS=': '
00:05:00.504 01:39:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.504
01:39:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # continue 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # continue 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # continue 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # continue 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # continue 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # continue 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # continue 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # continue 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # continue 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # continue 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # continue 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # continue 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # IFS=': ' 00:05:00.504 01:39:45 -- setup/common.sh@31 -- # read -r var val _ 00:05:00.504 01:39:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:00.504 01:39:45 -- setup/common.sh@33 -- # echo 0 00:05:00.504 
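The trace above repeats one pattern for every lookup: setup/common.sh reads the whole meminfo table, strips any per-node prefix, then walks it key by key until the requested field matches and echoes its value. A minimal re-creation of that helper, assuming only what the trace shows (the get_meminfo name, the "Node N " prefix stripping, /proc vs. per-node sysfs selection) and condensing the traced mapfile/read loop into a single awk pass:

# Sketch of the get_meminfo helper seen at setup/common.sh@16-33; the real
# script walks a mapfile'd array with read -r, this condenses that to awk.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # node-scoped lookups (e.g. get_meminfo HugePages_Surp 0) read sysfs instead
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # sysfs rows carry a "Node N " prefix ("Node 0 MemTotal: ..."); drop it,
    # then print the value column of the requested key and stop
    awk -v key="$get" '{ sub(/^Node [0-9]+ /, "") }
        $1 == key":" { print $2; exit }' "$mem_f"
}

Against the snapshot printed above, get_meminfo HugePages_Rsvd prints 0, matching the traced result.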
00:05:00.504 01:39:45 -- setup/hugepages.sh@100 -- # resv=0
00:05:00.504 01:39:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:05:00.504 01:39:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:00.504 01:39:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:00.504 01:39:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:00.504 01:39:45 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:00.504 01:39:45 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
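The lookups feed one consistency check: the kernel's HugePages_Total must equal the requested page count plus surplus plus reserved pages. A sketch of that arithmetic using the values echoed above (variable names mirror the trace; the exact hugepages.sh control flow may differ in detail):

# Accounting check corresponding to hugepages.sh@107-@110 in the trace
nr_hugepages=1024
surp=$(get_meminfo HugePages_Surp)    # 0 in this run
resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
total=$(get_meminfo HugePages_Total)  # looked up next in the trace
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: $total pages"
else
    echo "mismatch: total=$total expected=$((nr_hugepages + surp + resv))" >&2
fi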
00:05:00.504 01:39:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:00.504 01:39:45 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:00.504 01:39:45 -- setup/common.sh@18 -- # local node=
00:05:00.504 01:39:45 -- setup/common.sh@19 -- # local var val
00:05:00.504 01:39:45 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.504 01:39:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.504 01:39:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:00.504 01:39:45 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:00.504 01:39:45 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.504 01:39:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.504 01:39:45 -- setup/common.sh@31 -- # IFS=': '
00:05:00.504 01:39:45 -- setup/common.sh@31 -- # read -r var val _
00:05:00.504 01:39:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35840864 kB' 'MemAvailable: 39552564 kB' 'Buffers: 2696 kB' 'Cached: 20187272 kB' 'SwapCached: 0 kB' 'Active: 17163368 kB' 'Inactive: 3504240 kB' 'Active(anon): 16577236 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481028 kB' 'Mapped: 228008 kB' 'Shmem: 16099596 kB' 'KReclaimable: 229904 kB' 'Slab: 620924 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 391020 kB' 'KernelStack: 12928 kB' 'PageTables: 8476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 17706712 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197164 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
[setup/common.sh@31-32: every snapshot key from MemTotal through Unaccepted tested against HugePages_Total; all non-matching keys fall through to continue]
00:05:00.505 01:39:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:00.505 01:39:45 -- setup/common.sh@33 -- # echo 1024
00:05:00.505 01:39:45 -- setup/common.sh@33 -- # return 0
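The backslash-riddled patterns that dominate this log (e.g. \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l) are not corruption: when the right-hand side of a [[ == ]] test is quoted, bash's xtrace prints it with every character escaped to show it will be matched literally rather than as a glob. A two-line illustration:

get=HugePages_Total
set -x
[[ MemTotal == "$get" ]]   # xtrace renders this as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
set +x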
00:05:00.505 01:39:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:00.505 01:39:45 -- setup/hugepages.sh@112 -- # get_nodes
00:05:00.505 01:39:45 -- setup/hugepages.sh@27 -- # local node
00:05:00.505 01:39:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:00.505 01:39:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:00.505 01:39:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:00.505 01:39:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:00.505 01:39:45 -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:00.505 01:39:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:00.505 01:39:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:00.505 01:39:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
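get_nodes enumerates the NUMA nodes by globbing sysfs, as the node+([0-9]) extglob pattern in the trace shows. A sketch of the per-node tally it builds (nodes_sys mirrors the traced name; the real script may read each node's count from its hugepages sysfs entry rather than through the get_meminfo helper sketched earlier):

shopt -s extglob            # required for the node+([0-9]) pattern
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    n=${node##*node}        # "/sys/.../node0" -> "0"
    nodes_sys[$n]=$(get_meminfo HugePages_Total "$n")
done
echo "no_nodes=${#nodes_sys[@]}"   # 2 in this run: node0=1024, node1=0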
00:05:00.505 01:39:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:00.505 01:39:45 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:00.505 01:39:45 -- setup/common.sh@18 -- # local node=0
00:05:00.505 01:39:45 -- setup/common.sh@19 -- # local var val
00:05:00.505 01:39:45 -- setup/common.sh@20 -- # local mem_f mem
00:05:00.506 01:39:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:00.506 01:39:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:00.506 01:39:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:00.506 01:39:45 -- setup/common.sh@28 -- # mapfile -t mem
00:05:00.506 01:39:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:00.506 01:39:45 -- setup/common.sh@31 -- # IFS=': '
00:05:00.506 01:39:45 -- setup/common.sh@31 -- # read -r var val _
00:05:00.506 01:39:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 16455588 kB' 'MemUsed: 16374296 kB' 'SwapCached: 0 kB' 'Active: 9956280 kB' 'Inactive: 3323444 kB' 'Active(anon): 9576968 kB' 'Inactive(anon): 0 kB' 'Active(file): 379312 kB' 'Inactive(file): 3323444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12991800 kB' 'Mapped: 125112 kB' 'AnonPages: 291108 kB' 'Shmem: 9289044 kB' 'KernelStack: 7096 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120184 kB' 'Slab: 316696 kB' 'SReclaimable: 120184 kB' 'SUnreclaim: 196512 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[setup/common.sh@31-32: every node0 key from MemTotal through HugePages_Free tested against HugePages_Surp; all non-matching keys fall through to continue]
00:05:00.506 01:39:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:00.507 01:39:45 -- setup/common.sh@33 -- # echo 0
00:05:00.507 01:39:45 -- setup/common.sh@33 -- # return 0
00:05:00.507 01:39:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:00.507 01:39:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:00.507 01:39:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:00.507 01:39:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:00.507 01:39:45 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:05:00.507 01:39:45 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:00.507 01:39:45 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:00.507 01:39:45 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:00.507 01:39:45 -- setup/hugepages.sh@202 -- # setup output
00:05:00.507 01:39:45 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:00.507 01:39:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:05:01.442 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:01.442 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:05:01.442 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:01.442 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:01.442 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:01.442 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:01.705 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:01.705 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:01.705 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:01.705 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:05:01.705 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:05:01.705 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:05:01.705 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:05:01.705 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:05:01.705 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:05:01.705 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:05:01.705 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:05:01.705 INFO: Requested 512 hugepages but 1024 already allocated on node0
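setup.sh was invoked with NRHUGE=512 yet reports that node0 keeps its existing 1024 pages. A sketch of how that INFO line can arise (the internals of scripts/setup.sh are assumed here, not taken from the trace):

# Hypothetical reduction of the setup.sh decision behind the INFO line above
requested=${NRHUGE:-512}
allocated=$(get_meminfo HugePages_Total 0)   # node0; 1024 in this run
if (( allocated >= requested )); then
    # enough pages already reserved on the node, so nothing is reallocated
    echo "INFO: Requested $requested hugepages but $allocated already allocated on node0"
fi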
01:39:47 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:01.705 01:39:47 -- setup/hugepages.sh@89 -- # local node
00:05:01.705 01:39:47 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:01.705 01:39:47 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:01.705 01:39:47 -- setup/hugepages.sh@92 -- # local surp
00:05:01.705 01:39:47 -- setup/hugepages.sh@93 -- # local resv
00:05:01.705 01:39:47 -- setup/hugepages.sh@94 -- # local anon
00:05:01.705 01:39:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:01.705 01:39:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:01.705 01:39:47 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:01.705 01:39:47 -- setup/common.sh@18 -- # local node=
00:05:01.705 01:39:47 -- setup/common.sh@19 -- # local var val
00:05:01.705 01:39:47 -- setup/common.sh@20 -- # local mem_f mem
00:05:01.705 01:39:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.705 01:39:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.705 01:39:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.705 01:39:47 -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.705 01:39:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.705 01:39:47 -- setup/common.sh@31 -- # IFS=': '
00:05:01.705 01:39:47 -- setup/common.sh@31 -- # read -r var val _
00:05:01.705 01:39:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35828096 kB' 'MemAvailable: 39539796 kB' 'Buffers: 2696 kB' 'Cached: 20187324 kB' 'SwapCached: 0 kB' 'Active: 17163916 kB' 'Inactive: 3504240 kB' 'Active(anon): 16577784 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481304 kB' 'Mapped: 228088 kB' 'Shmem: 16099648 kB' 'KReclaimable: 229904 kB' 'Slab: 620772 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 390868 kB' 'KernelStack: 12896 kB' 'PageTables: 8408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 17706864 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197292 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
[setup/common.sh@31-32: every snapshot key from MemTotal through HardwareCorrupted tested against AnonHugePages; all non-matching keys fall through to continue]
00:05:01.706 01:39:47 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:01.706 01:39:47 -- setup/common.sh@33 -- # echo 0
00:05:01.706 01:39:47 -- setup/common.sh@33 -- # return 0
00:05:01.706 01:39:47 -- setup/hugepages.sh@97 -- # anon=0
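verify_nr_hugepages only counts AnonHugePages toward the expected total when transparent hugepages are not disabled, which is what the [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] test above checks: the bracketed entry in /sys/kernel/mm/transparent_hugepage/enabled is the active THP mode. A sketch of that gate:

# Anon-hugepage gate corresponding to hugepages.sh@96-@97 in the trace
thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
anon=0
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo AnonHugePages)   # kB; 0 in this run
fi
echo "anon_hugepages=$anon"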
00:05:01.706 01:39:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:01.706 01:39:47 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:01.706 01:39:47 -- setup/common.sh@18 -- # local node=
00:05:01.706 01:39:47 -- setup/common.sh@19 -- # local var val
00:05:01.706 01:39:47 -- setup/common.sh@20 -- # local mem_f mem
00:05:01.706 01:39:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:01.706 01:39:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:01.706 01:39:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:01.706 01:39:47 -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.706 01:39:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:01.706 01:39:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541720 kB' 'MemFree: 35834656 kB' 'MemAvailable: 39546356 kB' 'Buffers: 2696 kB' 'Cached: 20187328 kB' 'SwapCached: 0 kB' 'Active: 17164044 kB' 'Inactive: 3504240 kB' 'Active(anon): 16577912 kB' 'Inactive(anon): 0 kB' 'Active(file): 586132 kB' 'Inactive(file): 3504240 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481404 kB' 'Mapped: 228448 kB' 'Shmem: 16099652 kB' 'KReclaimable: 229904 kB' 'Slab: 620748 kB' 'SReclaimable: 229904 kB' 'SUnreclaim: 390844 kB' 'KernelStack: 12912 kB' 'PageTables: 8428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 17708100 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 197260 kB' 'VmallocChunk: 0 kB' 'Percpu: 41280 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2809436 kB' 'DirectMap2M: 21227520 kB' 'DirectMap1G: 45088768 kB'
[xtrace trimmed: the field scan hits "continue" on every entry from MemTotal down to HugePages_Rsvd, then matches]
00:05:01.708 01:39:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:01.708 01:39:47 -- setup/common.sh@33 -- # echo 0
00:05:01.708 01:39:47 -- setup/common.sh@33 -- # return 0
00:05:01.708 01:39:47 -- setup/hugepages.sh@99 -- # surp=0
00:05:01.708 01:39:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
[xtrace trimmed: same /proc/meminfo setup and field scan as above; the snapshot differs only in a few fields (MemFree: 35834304 kB, Active: 17167608 kB, AnonPages: 484984 kB, PageTables: 8424 kB, Committed_AS: 17711548 kB, VmallocUsed: 197244 kB); the scan stops at HugePages_Rsvd]
00:05:01.709 01:39:47 -- setup/common.sh@33 -- # echo 0
00:05:01.709 01:39:47 -- setup/common.sh@33 -- # return 0
00:05:01.709 01:39:47 -- setup/hugepages.sh@100 -- # resv=0
00:05:01.709 01:39:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:01.709 nr_hugepages=1024
00:05:01.709 01:39:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:01.709 resv_hugepages=0
00:05:01.709 01:39:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:01.709 surplus_hugepages=0
00:05:01.709 01:39:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:01.709 anon_hugepages=0
00:05:01.709 01:39:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:01.709 01:39:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
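The bookkeeping traced above reduces to three counter reads and two arithmetic assertions. Roughly, with values from this run and get_meminfo as sketched earlier:

    anon=$(get_meminfo AnonHugePages)    # 0: no transparent hugepages in play
    surp=$(get_meminfo HugePages_Surp)   # 0: kernel allocated no surplus pages
    resv=$(get_meminfo HugePages_Rsvd)   # 0: no pages reserved but not yet faulted
    nr_hugepages=1024                    # pool size requested by the test
    (( 1024 == nr_hugepages + surp + resv ))   # every page is accounted for
    (( 1024 == nr_hugepages ))                 # and no surplus/reserve crept in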
00:05:01.709 01:39:47 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace trimmed: same /proc/meminfo setup and field scan as the HugePages_Surp read above; snapshot deltas this time: MemFree: 35834576 kB, Active: 17169332 kB, AnonPages: 486724 kB, Mapped: 228768 kB, KernelStack: 12928 kB, PageTables: 8496 kB, Committed_AS: 17713024 kB, VmallocUsed: 197264 kB; all HugePages_* fields unchanged; the scan stops at HugePages_Total]
00:05:01.971 01:39:47 -- setup/common.sh@33 -- # echo 1024
00:05:01.971 01:39:47 -- setup/common.sh@33 -- # return 0
00:05:01.971 01:39:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
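The trace next enumerates the NUMA nodes and records each node's hugepage count. A sketch of that get_nodes step, assuming the per-node count is read from each node's nr_hugepages file for the default 2048 kB size (the real helper may resolve the page size differently):

    shopt -s extglob nullglob
    get_nodes() {
        local node
        nodes_sys=()
        for node in /sys/devices/system/node/node+([0-9]); do
            # array index "0"/"1" extracted from ".../node0", ".../node1"
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}   # 2 on this machine: node0=1024, node1=0
        (( no_nodes > 0 ))          # at least one node must exist
    }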
00:05:01.971 01:39:47 -- setup/hugepages.sh@112 -- # get_nodes
00:05:01.971 01:39:47 -- setup/hugepages.sh@27 -- # local node
00:05:01.971 01:39:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:01.971 01:39:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:01.971 01:39:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:01.971 01:39:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:05:01.971 01:39:47 -- setup/hugepages.sh@32 -- # no_nodes=2
00:05:01.971 01:39:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:01.971 01:39:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:01.971 01:39:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:01.971 01:39:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:01.971 01:39:47 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:01.971 01:39:47 -- setup/common.sh@18 -- # local node=0
00:05:01.971 01:39:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:01.971 01:39:47 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:01.971 01:39:47 -- setup/common.sh@28 -- # mapfile -t mem
00:05:01.971 01:39:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 16451784 kB' 'MemUsed: 16378100 kB' 'SwapCached: 0 kB' 'Active: 9957124 kB' 'Inactive: 3323444 kB' 'Active(anon): 9577812 kB' 'Inactive(anon): 0 kB' 'Active(file): 379312 kB' 'Inactive(file): 3323444 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12991804 kB' 'Mapped: 125116 kB' 'AnonPages: 291896 kB' 'Shmem: 9289048 kB' 'KernelStack: 7112 kB' 'PageTables: 4468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 120184 kB' 'Slab: 316608 kB' 'SReclaimable: 120184 kB' 'SUnreclaim: 196424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace trimmed: the field scan over the node0 snapshot stops at HugePages_Surp]
00:05:01.972 01:39:47 -- setup/common.sh@33 -- # echo 0
00:05:01.972 01:39:47 -- setup/common.sh@33 -- # return 0
00:05:01.972 01:39:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:01.972 01:39:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:01.972 01:39:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:01.972 01:39:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:01.972 01:39:47 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:01.972 node0=1024 expecting 1024
00:05:01.972 01:39:47 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:01.972 real	0m2.864s
00:05:01.972 user	0m1.168s
00:05:01.972 sys	0m1.615s
00:05:01.972 01:39:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:01.972 01:39:47 -- common/autotest_common.sh@10 -- # set +x
00:05:01.972 ************************************
00:05:01.972 END TEST no_shrink_alloc
00:05:01.972 ************************************
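The per-node assertion that just passed boils down to folding each node's surplus and reserve into the expected count and comparing it against what get_nodes read from sysfs. A rough sketch, with array values from this run and get_meminfo as sketched earlier (how nodes_test is seeded happens earlier in the suite and is assumed here):

    nodes_test=([0]=1024); nodes_sys=([0]=1024 [1]=0); resv=0
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))   # fold reserved pages into the expectation
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))   # 0 here
    done
    sorted_t=() sorted_s=()
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1     # set of observed per-node counts
        sorted_s[nodes_sys[node]]=1      # set of expected per-node counts
        echo "node${node}=${nodes_test[node]} expecting ${nodes_sys[node]}"
    done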
01:39:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:01.972 01:39:47 -- setup/hugepages.sh@41 -- # echo 0 00:05:01.972 01:39:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:01.972 01:39:47 -- setup/hugepages.sh@41 -- # echo 0 00:05:01.972 01:39:47 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:01.972 01:39:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:01.972 01:39:47 -- setup/hugepages.sh@41 -- # echo 0 00:05:01.972 01:39:47 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:01.972 01:39:47 -- setup/hugepages.sh@41 -- # echo 0 00:05:01.972 01:39:47 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:01.972 01:39:47 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:01.972 00:05:01.972 real 0m11.370s 00:05:01.972 user 0m4.334s 00:05:01.972 sys 0m5.940s 00:05:01.972 01:39:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.972 01:39:47 -- common/autotest_common.sh@10 -- # set +x 00:05:01.972 ************************************ 00:05:01.972 END TEST hugepages 00:05:01.972 ************************************ 00:05:01.972 01:39:47 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:01.972 01:39:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:01.972 01:39:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:01.972 01:39:47 -- common/autotest_common.sh@10 -- # set +x 00:05:01.972 ************************************ 00:05:01.972 START TEST driver 00:05:01.972 ************************************ 00:05:01.972 01:39:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:05:01.972 * Looking for test storage... 
00:05:01.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:01.972 01:39:47 -- setup/driver.sh@68 -- # setup reset 00:05:01.972 01:39:47 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:01.973 01:39:47 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:04.505 01:39:50 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:04.505 01:39:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:04.505 01:39:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:04.505 01:39:50 -- common/autotest_common.sh@10 -- # set +x 00:05:04.505 ************************************ 00:05:04.505 START TEST guess_driver 00:05:04.505 ************************************ 00:05:04.505 01:39:50 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:04.505 01:39:50 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:04.505 01:39:50 -- setup/driver.sh@47 -- # local fail=0 00:05:04.505 01:39:50 -- setup/driver.sh@49 -- # pick_driver 00:05:04.505 01:39:50 -- setup/driver.sh@36 -- # vfio 00:05:04.505 01:39:50 -- setup/driver.sh@21 -- # local iommu_grups 00:05:04.505 01:39:50 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:04.505 01:39:50 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:04.505 01:39:50 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:05:04.505 01:39:50 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:04.505 01:39:50 -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:05:04.505 01:39:50 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:05:04.505 01:39:50 -- setup/driver.sh@14 -- # mod vfio_pci 00:05:04.505 01:39:50 -- setup/driver.sh@12 -- # dep vfio_pci 00:05:04.505 01:39:50 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:05:04.505 01:39:50 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:05:04.505 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:04.505 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:04.505 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:05:04.505 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:05:04.505 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:05:04.505 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:05:04.505 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:05:04.505 01:39:50 -- setup/driver.sh@30 -- # return 0 00:05:04.505 01:39:50 -- setup/driver.sh@37 -- # echo vfio-pci 00:05:04.505 01:39:50 -- setup/driver.sh@49 -- # driver=vfio-pci 00:05:04.505 01:39:50 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:04.505 01:39:50 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:05:04.505 Looking for driver=vfio-pci 00:05:04.505 01:39:50 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:04.505 01:39:50 -- setup/driver.sh@45 -- # setup output config 00:05:04.505 01:39:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.505 01:39:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:05.883 01:39:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.883 01:39:51 -- setup/driver.sh@61 -- # [[ vfio-pci == 
vfio-pci ]] 00:05:05.883 01:39:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.883 01:39:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.883 01:39:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.883 01:39:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.883 01:39:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.883 01:39:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.883 01:39:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.883 01:39:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.883 01:39:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.883 01:39:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.883 01:39:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.883 01:39:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.883 01:39:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.883 01:39:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.883 01:39:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.883 01:39:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.883 01:39:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.883 01:39:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.883 01:39:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.883 01:39:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.883 01:39:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.883 01:39:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.883 01:39:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.883 01:39:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.883 01:39:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.883 01:39:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.883 01:39:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.883 01:39:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.883 01:39:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.883 01:39:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.883 01:39:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.883 01:39:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.883 01:39:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.883 01:39:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.883 01:39:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.883 01:39:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.883 01:39:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.883 01:39:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.883 01:39:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.883 01:39:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.883 01:39:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.883 01:39:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.883 01:39:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:05.883 01:39:51 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:05.883 01:39:51 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:05.883 01:39:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.821 01:39:52 -- setup/driver.sh@58 -- # [[ 
-> == \-\> ]] 00:05:06.821 01:39:52 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:06.821 01:39:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:06.821 01:39:52 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:06.821 01:39:52 -- setup/driver.sh@65 -- # setup reset 00:05:06.821 01:39:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:06.821 01:39:52 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:09.353 00:05:09.353 real 0m4.841s 00:05:09.353 user 0m1.094s 00:05:09.353 sys 0m1.912s 00:05:09.353 01:39:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.353 01:39:54 -- common/autotest_common.sh@10 -- # set +x 00:05:09.353 ************************************ 00:05:09.353 END TEST guess_driver 00:05:09.353 ************************************ 00:05:09.353 00:05:09.353 real 0m7.445s 00:05:09.353 user 0m1.667s 00:05:09.353 sys 0m2.979s 00:05:09.353 01:39:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.353 01:39:54 -- common/autotest_common.sh@10 -- # set +x 00:05:09.353 ************************************ 00:05:09.353 END TEST driver 00:05:09.353 ************************************ 00:05:09.353 01:39:54 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:09.353 01:39:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.353 01:39:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.353 01:39:54 -- common/autotest_common.sh@10 -- # set +x 00:05:09.353 ************************************ 00:05:09.353 START TEST devices 00:05:09.353 ************************************ 00:05:09.353 01:39:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:05:09.353 * Looking for test storage... 
00:05:09.353 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:05:09.353 01:39:54 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:09.353 01:39:54 -- setup/devices.sh@192 -- # setup reset 00:05:09.353 01:39:54 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:09.353 01:39:54 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:11.255 01:39:56 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:11.255 01:39:56 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:11.255 01:39:56 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:11.255 01:39:56 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:11.255 01:39:56 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:11.255 01:39:56 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:11.255 01:39:56 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:11.255 01:39:56 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:11.255 01:39:56 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:11.255 01:39:56 -- setup/devices.sh@196 -- # blocks=() 00:05:11.255 01:39:56 -- setup/devices.sh@196 -- # declare -a blocks 00:05:11.255 01:39:56 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:11.255 01:39:56 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:11.255 01:39:56 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:11.255 01:39:56 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:11.255 01:39:56 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:11.255 01:39:56 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:11.255 01:39:56 -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:11.255 01:39:56 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:11.255 01:39:56 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:11.255 01:39:56 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:11.255 01:39:56 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:11.255 No valid GPT data, bailing 00:05:11.255 01:39:56 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:11.255 01:39:56 -- scripts/common.sh@393 -- # pt= 00:05:11.255 01:39:56 -- scripts/common.sh@394 -- # return 1 00:05:11.255 01:39:56 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:11.255 01:39:56 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:11.255 01:39:56 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:11.255 01:39:56 -- setup/common.sh@80 -- # echo 1000204886016 00:05:11.255 01:39:56 -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:11.255 01:39:56 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:11.255 01:39:56 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:11.255 01:39:56 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:11.255 01:39:56 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:11.255 01:39:56 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:11.255 01:39:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:11.255 01:39:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:11.255 01:39:56 -- common/autotest_common.sh@10 -- # set +x 00:05:11.255 ************************************ 00:05:11.255 START TEST nvme_mount 00:05:11.255 ************************************ 00:05:11.255 01:39:56 -- 
common/autotest_common.sh@1104 -- # nvme_mount 00:05:11.255 01:39:56 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:11.255 01:39:56 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:11.255 01:39:56 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.255 01:39:56 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.255 01:39:56 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:11.255 01:39:56 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:11.255 01:39:56 -- setup/common.sh@40 -- # local part_no=1 00:05:11.255 01:39:56 -- setup/common.sh@41 -- # local size=1073741824 00:05:11.255 01:39:56 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:11.255 01:39:56 -- setup/common.sh@44 -- # parts=() 00:05:11.255 01:39:56 -- setup/common.sh@44 -- # local parts 00:05:11.255 01:39:56 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:11.255 01:39:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:11.255 01:39:56 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:11.255 01:39:56 -- setup/common.sh@46 -- # (( part++ )) 00:05:11.255 01:39:56 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:11.255 01:39:56 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:11.255 01:39:56 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:11.255 01:39:56 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:12.190 Creating new GPT entries in memory. 00:05:12.190 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:12.190 other utilities. 00:05:12.190 01:39:57 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:12.190 01:39:57 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:12.190 01:39:57 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:12.190 01:39:57 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:12.190 01:39:57 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:13.126 Creating new GPT entries in memory. 00:05:13.126 The operation has completed successfully. 
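[editor's note] For readers following the trace: the partitioning step above boils down to zapping the disk and creating one 1 GiB partition under flock. A minimal sketch, assuming /dev/nvme0n1 as the target disk (the real helper lives in test/setup/common.sh and additionally syncs partition uevents via sync_dev_uevents.sh):

  #!/usr/bin/env bash
  # Sketch of the partition_drive flow traced above (assumes disk=/dev/nvme0n1).
  disk=/dev/nvme0n1
  size=$((1073741824 / 512))            # 1 GiB expressed in 512-byte sectors
  sgdisk "$disk" --zap-all              # wipe any existing GPT/MBR structures
  part_start=2048                       # first usable sector, as in the trace
  part_end=$((part_start + size - 1))   # 2099199 for a 1 GiB partition
  # flock serializes the table rewrite against concurrent readers of the disk
  flock "$disk" sgdisk "$disk" --new=1:${part_start}:${part_end}

The sector arithmetic reproduces the exact --new=1:2048:2099199 call visible in the trace.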
00:05:13.126 01:39:58 -- setup/common.sh@57 -- # (( part++ )) 00:05:13.126 01:39:58 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:13.126 01:39:58 -- setup/common.sh@62 -- # wait 2026368 00:05:13.126 01:39:58 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.126 01:39:58 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:13.126 01:39:58 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.126 01:39:58 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:13.126 01:39:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:13.126 01:39:58 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.126 01:39:58 -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:13.126 01:39:58 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:13.126 01:39:58 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:13.126 01:39:58 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.126 01:39:58 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:13.126 01:39:58 -- setup/devices.sh@53 -- # local found=0 00:05:13.126 01:39:58 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:13.126 01:39:58 -- setup/devices.sh@56 -- # : 00:05:13.126 01:39:58 -- setup/devices.sh@59 -- # local pci status 00:05:13.126 01:39:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.126 01:39:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:13.126 01:39:58 -- setup/devices.sh@47 -- # setup output config 00:05:13.126 01:39:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.126 01:39:58 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:14.061 01:39:59 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:39:59 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:14.061 01:39:59 -- setup/devices.sh@63 -- # found=1 00:05:14.061 01:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:39:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:39:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:39:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:39:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:39:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 
01:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:39:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:39:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:39:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:39:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:39:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:39:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:39:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:39:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:39:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:39:59 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.061 01:39:59 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:14.061 01:39:59 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.320 01:39:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:14.320 01:39:59 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:14.320 01:39:59 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.320 01:39:59 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.320 01:39:59 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:14.320 01:39:59 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:14.320 01:39:59 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.320 01:39:59 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.320 01:39:59 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:14.320 01:39:59 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:14.320 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:14.320 01:39:59 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:14.320 01:39:59 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:14.578 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:14.578 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:14.578 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:14.578 
/dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:14.578 01:40:00 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:14.578 01:40:00 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:14.578 01:40:00 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.578 01:40:00 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:14.578 01:40:00 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:14.578 01:40:00 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.578 01:40:00 -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:14.578 01:40:00 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:14.578 01:40:00 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:14.578 01:40:00 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:14.578 01:40:00 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:14.578 01:40:00 -- setup/devices.sh@53 -- # local found=0 00:05:14.579 01:40:00 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:14.579 01:40:00 -- setup/devices.sh@56 -- # : 00:05:14.579 01:40:00 -- setup/devices.sh@59 -- # local pci status 00:05:14.579 01:40:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:14.579 01:40:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:14.579 01:40:00 -- setup/devices.sh@47 -- # setup output config 00:05:14.579 01:40:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.579 01:40:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:15.955 01:40:01 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.955 01:40:01 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:15.955 01:40:01 -- setup/devices.sh@63 -- # found=1 00:05:15.955 01:40:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.955 01:40:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.955 01:40:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.955 01:40:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.955 01:40:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.955 01:40:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.955 01:40:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.955 01:40:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.955 01:40:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.955 01:40:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.955 01:40:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.955 01:40:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.955 01:40:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.955 01:40:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.955 01:40:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.955 01:40:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.956 01:40:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.956 01:40:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.956 01:40:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.956 01:40:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.956 01:40:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.956 01:40:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.956 01:40:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.956 01:40:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.956 01:40:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.956 01:40:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.956 01:40:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.956 01:40:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.956 01:40:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.956 01:40:01 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.956 01:40:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.956 01:40:01 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:15.956 01:40:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.956 01:40:01 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.956 01:40:01 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:15.956 01:40:01 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.956 01:40:01 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.956 01:40:01 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:15.956 01:40:01 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.956 01:40:01 -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:15.956 01:40:01 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:15.956 01:40:01 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:15.956 01:40:01 -- setup/devices.sh@50 -- # local mount_point= 00:05:15.956 01:40:01 -- setup/devices.sh@51 -- # local test_file= 00:05:15.956 01:40:01 -- setup/devices.sh@53 -- # local found=0 00:05:15.956 01:40:01 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:15.956 01:40:01 -- setup/devices.sh@59 -- # local pci status 00:05:15.956 01:40:01 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.956 01:40:01 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:15.956 01:40:01 -- setup/devices.sh@47 -- # setup output config 00:05:15.956 01:40:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.956 01:40:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:17.361 01:40:02 -- 
setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.361 01:40:02 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:17.361 01:40:02 -- setup/devices.sh@63 -- # found=1 00:05:17.361 01:40:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.361 01:40:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.361 01:40:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.361 01:40:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.361 01:40:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.361 01:40:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.361 01:40:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.361 01:40:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.361 01:40:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.361 01:40:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.361 01:40:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.361 01:40:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.361 01:40:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.361 01:40:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.361 01:40:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.361 01:40:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.361 01:40:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.361 01:40:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.361 01:40:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.361 01:40:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.361 01:40:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.361 01:40:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.361 01:40:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.361 01:40:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.361 01:40:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.361 01:40:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.361 01:40:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.361 01:40:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.361 01:40:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.361 01:40:02 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.361 01:40:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.361 01:40:02 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:17.361 01:40:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:17.361 01:40:02 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:17.361 01:40:02 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:17.361 01:40:02 -- setup/devices.sh@68 -- # return 0 00:05:17.361 01:40:02 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:17.361 01:40:02 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:17.361 01:40:02 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 
]] 00:05:17.361 01:40:02 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:17.361 01:40:02 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:17.361 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:17.361 00:05:17.361 real 0m6.472s 00:05:17.361 user 0m1.563s 00:05:17.361 sys 0m2.478s 00:05:17.361 01:40:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.361 01:40:02 -- common/autotest_common.sh@10 -- # set +x 00:05:17.361 ************************************ 00:05:17.361 END TEST nvme_mount 00:05:17.361 ************************************ 00:05:17.361 01:40:03 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:17.361 01:40:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.361 01:40:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.361 01:40:03 -- common/autotest_common.sh@10 -- # set +x 00:05:17.620 ************************************ 00:05:17.620 START TEST dm_mount 00:05:17.620 ************************************ 00:05:17.621 01:40:03 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:17.621 01:40:03 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:17.621 01:40:03 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:17.621 01:40:03 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:17.621 01:40:03 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:17.621 01:40:03 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:17.621 01:40:03 -- setup/common.sh@40 -- # local part_no=2 00:05:17.621 01:40:03 -- setup/common.sh@41 -- # local size=1073741824 00:05:17.621 01:40:03 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:17.621 01:40:03 -- setup/common.sh@44 -- # parts=() 00:05:17.621 01:40:03 -- setup/common.sh@44 -- # local parts 00:05:17.621 01:40:03 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:17.621 01:40:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.621 01:40:03 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.621 01:40:03 -- setup/common.sh@46 -- # (( part++ )) 00:05:17.621 01:40:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.621 01:40:03 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:17.621 01:40:03 -- setup/common.sh@46 -- # (( part++ )) 00:05:17.621 01:40:03 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:17.621 01:40:03 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:17.621 01:40:03 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:17.621 01:40:03 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:18.555 Creating new GPT entries in memory. 00:05:18.555 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:18.555 other utilities. 00:05:18.555 01:40:04 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:18.555 01:40:04 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:18.555 01:40:04 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:18.555 01:40:04 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:18.555 01:40:04 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:19.493 Creating new GPT entries in memory. 00:05:19.493 The operation has completed successfully. 
00:05:19.493 01:40:05 -- setup/common.sh@57 -- # (( part++ )) 00:05:19.493 01:40:05 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:19.493 01:40:05 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:19.493 01:40:05 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:19.493 01:40:05 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:20.428 The operation has completed successfully. 00:05:20.428 01:40:06 -- setup/common.sh@57 -- # (( part++ )) 00:05:20.428 01:40:06 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:20.428 01:40:06 -- setup/common.sh@62 -- # wait 2028837 00:05:20.686 01:40:06 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:20.686 01:40:06 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:20.686 01:40:06 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:20.686 01:40:06 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:20.686 01:40:06 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:20.686 01:40:06 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.686 01:40:06 -- setup/devices.sh@161 -- # break 00:05:20.686 01:40:06 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.686 01:40:06 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:20.686 01:40:06 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:20.686 01:40:06 -- setup/devices.sh@166 -- # dm=dm-0 00:05:20.686 01:40:06 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:20.686 01:40:06 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:20.686 01:40:06 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:20.686 01:40:06 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:20.686 01:40:06 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:20.686 01:40:06 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:20.686 01:40:06 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:20.686 01:40:06 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:20.686 01:40:06 -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:20.686 01:40:06 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:20.686 01:40:06 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:20.686 01:40:06 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:20.686 01:40:06 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:20.686 01:40:06 -- setup/devices.sh@53 -- # local found=0 00:05:20.686 01:40:06 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:20.686 01:40:06 -- setup/devices.sh@56 -- # : 00:05:20.686 01:40:06 -- 
setup/devices.sh@59 -- # local pci status 00:05:20.686 01:40:06 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:20.686 01:40:06 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:20.686 01:40:06 -- setup/devices.sh@47 -- # setup output config 00:05:20.686 01:40:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.686 01:40:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:21.622 01:40:07 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.622 01:40:07 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:21.622 01:40:07 -- setup/devices.sh@63 -- # found=1 00:05:21.622 01:40:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.622 01:40:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.622 01:40:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.622 01:40:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.622 01:40:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.622 01:40:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.622 01:40:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.622 01:40:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.622 01:40:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.622 01:40:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.622 01:40:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.622 01:40:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.622 01:40:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.622 01:40:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.622 01:40:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.622 01:40:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.622 01:40:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.622 01:40:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.622 01:40:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.622 01:40:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.622 01:40:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.622 01:40:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.622 01:40:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.622 01:40:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.622 01:40:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.622 01:40:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.622 01:40:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.622 01:40:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.622 01:40:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.622 01:40:07 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.622 01:40:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.622 01:40:07 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:21.622 01:40:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.881 01:40:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:21.881 01:40:07 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:21.881 01:40:07 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.881 01:40:07 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:21.881 01:40:07 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:21.881 01:40:07 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:21.881 01:40:07 -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:21.881 01:40:07 -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:21.881 01:40:07 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:21.881 01:40:07 -- setup/devices.sh@50 -- # local mount_point= 00:05:21.881 01:40:07 -- setup/devices.sh@51 -- # local test_file= 00:05:21.881 01:40:07 -- setup/devices.sh@53 -- # local found=0 00:05:21.881 01:40:07 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:21.881 01:40:07 -- setup/devices.sh@59 -- # local pci status 00:05:21.881 01:40:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:21.881 01:40:07 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:21.881 01:40:07 -- setup/devices.sh@47 -- # setup output config 00:05:21.881 01:40:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.881 01:40:07 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:22.813 01:40:08 -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.813 01:40:08 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:22.813 01:40:08 -- setup/devices.sh@63 -- # found=1 00:05:22.813 01:40:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.813 01:40:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.813 01:40:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.813 01:40:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.813 01:40:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.813 01:40:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.813 01:40:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.813 01:40:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.813 01:40:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.813 01:40:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.813 01:40:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.813 01:40:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.813 01:40:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.813 01:40:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.813 01:40:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 
00:05:22.813 01:40:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.813 01:40:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.813 01:40:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.813 01:40:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.813 01:40:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.813 01:40:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.813 01:40:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.813 01:40:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.813 01:40:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.813 01:40:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.813 01:40:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.813 01:40:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.813 01:40:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.813 01:40:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.813 01:40:08 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.813 01:40:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.813 01:40:08 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:22.813 01:40:08 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:23.072 01:40:08 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:23.072 01:40:08 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:23.072 01:40:08 -- setup/devices.sh@68 -- # return 0 00:05:23.072 01:40:08 -- setup/devices.sh@187 -- # cleanup_dm 00:05:23.072 01:40:08 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:23.072 01:40:08 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:23.072 01:40:08 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:23.072 01:40:08 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.072 01:40:08 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:23.072 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:23.072 01:40:08 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:23.072 01:40:08 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:23.072 00:05:23.072 real 0m5.675s 00:05:23.072 user 0m0.974s 00:05:23.072 sys 0m1.558s 00:05:23.072 01:40:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.072 01:40:08 -- common/autotest_common.sh@10 -- # set +x 00:05:23.072 ************************************ 00:05:23.072 END TEST dm_mount 00:05:23.072 ************************************ 00:05:23.072 01:40:08 -- setup/devices.sh@1 -- # cleanup 00:05:23.072 01:40:08 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:23.072 01:40:08 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:23.072 01:40:08 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.072 01:40:08 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:23.072 01:40:08 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.072 01:40:08 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:23.330 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:23.330 /dev/nvme0n1: 8 bytes were erased at offset 
0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:23.330 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:23.330 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:23.330 01:40:08 -- setup/devices.sh@12 -- # cleanup_dm 00:05:23.330 01:40:08 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:23.588 01:40:08 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:23.588 01:40:08 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:23.588 01:40:08 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:23.588 01:40:08 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:23.588 01:40:08 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:23.588 00:05:23.588 real 0m14.065s 00:05:23.588 user 0m3.175s 00:05:23.588 sys 0m5.086s 00:05:23.588 01:40:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.588 01:40:08 -- common/autotest_common.sh@10 -- # set +x 00:05:23.588 ************************************ 00:05:23.588 END TEST devices 00:05:23.588 ************************************ 00:05:23.588 00:05:23.588 real 0m43.553s 00:05:23.588 user 0m12.520s 00:05:23.588 sys 0m19.377s 00:05:23.588 01:40:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.588 01:40:09 -- common/autotest_common.sh@10 -- # set +x 00:05:23.588 ************************************ 00:05:23.588 END TEST setup.sh 00:05:23.588 ************************************ 00:05:23.588 01:40:09 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:24.522 Hugepages 00:05:24.522 node hugesize free / total 00:05:24.522 node0 1048576kB 0 / 0 00:05:24.522 node0 2048kB 2048 / 2048 00:05:24.522 node1 1048576kB 0 / 0 00:05:24.522 node1 2048kB 0 / 0 00:05:24.522 00:05:24.522 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:24.522 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:24.522 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:24.522 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:24.522 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:24.522 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:24.522 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:24.522 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:24.522 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:24.522 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:24.522 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:24.522 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:24.522 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:24.522 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:24.522 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:24.522 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:24.522 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:24.781 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:24.781 01:40:10 -- spdk/autotest.sh@141 -- # uname -s 00:05:24.781 01:40:10 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:24.781 01:40:10 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:24.781 01:40:10 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:26.158 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:26.158 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:26.158 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:26.158 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:26.158 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:26.158 0000:00:04.2 (8086 0e22): 
ioatdma -> vfio-pci 00:05:26.158 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:26.158 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:26.158 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:26.158 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:26.158 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:26.158 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:26.158 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:26.158 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:26.158 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:26.158 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:27.093 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:27.093 01:40:12 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:28.028 01:40:13 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:28.028 01:40:13 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:28.028 01:40:13 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:28.028 01:40:13 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:28.028 01:40:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:28.028 01:40:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:28.028 01:40:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:28.028 01:40:13 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:28.028 01:40:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:28.028 01:40:13 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:28.028 01:40:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:28.028 01:40:13 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:29.403 Waiting for block devices as requested 00:05:29.403 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:29.403 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:29.661 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:29.661 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:29.662 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:29.662 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:29.920 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:29.920 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:29.920 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:29.920 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:30.179 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:30.179 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:30.179 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:30.179 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:30.437 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:30.437 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:30.437 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:30.696 01:40:16 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:30.696 01:40:16 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:30.696 01:40:16 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:05:30.696 01:40:16 -- common/autotest_common.sh@1487 -- # grep 0000:88:00.0/nvme/nvme 00:05:30.696 01:40:16 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:30.696 01:40:16 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:30.696 01:40:16 -- 
common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:30.696 01:40:16 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:30.696 01:40:16 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:30.696 01:40:16 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:30.696 01:40:16 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:30.696 01:40:16 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:30.696 01:40:16 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:30.696 01:40:16 -- common/autotest_common.sh@1530 -- # oacs=' 0xf' 00:05:30.696 01:40:16 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:30.696 01:40:16 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:30.696 01:40:16 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:30.696 01:40:16 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:30.696 01:40:16 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:30.696 01:40:16 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:30.696 01:40:16 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:30.696 01:40:16 -- common/autotest_common.sh@1542 -- # continue 00:05:30.696 01:40:16 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:30.696 01:40:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:30.696 01:40:16 -- common/autotest_common.sh@10 -- # set +x 00:05:30.696 01:40:16 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:30.696 01:40:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:30.696 01:40:16 -- common/autotest_common.sh@10 -- # set +x 00:05:30.696 01:40:16 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:32.114 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:32.114 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:32.114 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:32.114 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:32.114 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:32.114 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:32.114 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:32.114 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:32.114 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:32.114 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:32.114 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:32.114 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:32.114 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:32.114 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:32.114 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:32.114 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:33.051 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:33.051 01:40:18 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:33.051 01:40:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:33.051 01:40:18 -- common/autotest_common.sh@10 -- # set +x 00:05:33.051 01:40:18 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:33.051 01:40:18 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:33.051 01:40:18 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:33.051 01:40:18 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:33.051 01:40:18 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:33.051 01:40:18 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:33.051 01:40:18 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:33.051 
01:40:18 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:33.051 01:40:18 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:33.051 01:40:18 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:33.051 01:40:18 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:33.051 01:40:18 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:05:33.051 01:40:18 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:05:33.051 01:40:18 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:33.051 01:40:18 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:33.051 01:40:18 -- common/autotest_common.sh@1565 -- # device=0x0a54 00:05:33.051 01:40:18 -- common/autotest_common.sh@1566 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:33.051 01:40:18 -- common/autotest_common.sh@1567 -- # bdfs+=($bdf) 00:05:33.051 01:40:18 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:88:00.0 00:05:33.051 01:40:18 -- common/autotest_common.sh@1577 -- # [[ -z 0000:88:00.0 ]] 00:05:33.051 01:40:18 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=2034145 00:05:33.051 01:40:18 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:33.051 01:40:18 -- common/autotest_common.sh@1583 -- # waitforlisten 2034145 00:05:33.051 01:40:18 -- common/autotest_common.sh@819 -- # '[' -z 2034145 ']' 00:05:33.051 01:40:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:33.051 01:40:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:33.051 01:40:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:33.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:33.051 01:40:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:33.051 01:40:18 -- common/autotest_common.sh@10 -- # set +x 00:05:33.310 [2024-04-15 01:40:18.720966] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
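Note: the get_nvme_bdfs_by_id 0x0a54 filter traced above selects controllers by reading each candidate's PCI device ID out of sysfs and comparing it against the wanted value. A rough stand-alone C sketch of that same check follows; the helper name is hypothetical and the hard-coded BDF is just the one from this log.

```c
/* Hypothetical sketch of the sysfs device-ID check performed by
 * get_nvme_bdfs_by_id above; not SPDK source. */
#include <stdio.h>
#include <stdlib.h>

static int pci_device_id_matches(const char *bdf, unsigned int wanted)
{
	char path[256];
	char buf[32];
	FILE *f;
	unsigned int id;

	/* sysfs exposes the PCI device ID as e.g. "0x0a54\n". */
	snprintf(path, sizeof(path), "/sys/bus/pci/devices/%s/device", bdf);
	f = fopen(path, "r");
	if (f == NULL) {
		return 0;
	}
	if (fgets(buf, sizeof(buf), f) == NULL) {
		fclose(f);
		return 0;
	}
	fclose(f);
	id = (unsigned int)strtoul(buf, NULL, 16);
	return id == wanted;
}

int main(void)
{
	const char *bdf = "0000:88:00.0"; /* BDF taken from the log above */

	printf("%s %s 0x0a54\n", bdf,
	       pci_device_id_matches(bdf, 0x0a54) ? "matches" : "does not match");
	return 0;
}
```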
00:05:33.310 [2024-04-15 01:40:18.721088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2034145 ] 00:05:33.310 EAL: No free 2048 kB hugepages reported on node 1 00:05:33.310 [2024-04-15 01:40:18.784111] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.310 [2024-04-15 01:40:18.871940] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:33.310 [2024-04-15 01:40:18.872137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.244 01:40:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:34.244 01:40:19 -- common/autotest_common.sh@852 -- # return 0 00:05:34.244 01:40:19 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:05:34.244 01:40:19 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:05:34.245 01:40:19 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:37.531 nvme0n1 00:05:37.531 01:40:22 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:37.531 [2024-04-15 01:40:22.942555] nvme_opal.c:2059:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:37.531 [2024-04-15 01:40:22.942597] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:37.531 request: 00:05:37.531 { 00:05:37.531 "nvme_ctrlr_name": "nvme0", 00:05:37.531 "password": "test", 00:05:37.531 "method": "bdev_nvme_opal_revert", 00:05:37.531 "req_id": 1 00:05:37.531 } 00:05:37.531 Got JSON-RPC error response 00:05:37.531 response: 00:05:37.531 { 00:05:37.531 "code": -32603, 00:05:37.531 "message": "Internal error" 00:05:37.531 } 00:05:37.531 01:40:22 -- common/autotest_common.sh@1589 -- # true 00:05:37.531 01:40:22 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:05:37.531 01:40:22 -- common/autotest_common.sh@1593 -- # killprocess 2034145 00:05:37.531 01:40:22 -- common/autotest_common.sh@926 -- # '[' -z 2034145 ']' 00:05:37.531 01:40:22 -- common/autotest_common.sh@930 -- # kill -0 2034145 00:05:37.531 01:40:22 -- common/autotest_common.sh@931 -- # uname 00:05:37.531 01:40:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:37.531 01:40:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2034145 00:05:37.531 01:40:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:37.531 01:40:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:37.531 01:40:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2034145' 00:05:37.531 killing process with pid 2034145 00:05:37.531 01:40:22 -- common/autotest_common.sh@945 -- # kill 2034145 00:05:37.531 01:40:22 -- common/autotest_common.sh@950 -- # wait 2034145 00:05:39.430 01:40:24 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:39.430 01:40:24 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:39.430 01:40:24 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:39.430 01:40:24 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:39.430 01:40:24 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:39.430 01:40:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:39.430 01:40:24 -- common/autotest_common.sh@10 -- # set +x 00:05:39.430 
01:40:24 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:39.430 01:40:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.430 01:40:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.430 01:40:24 -- common/autotest_common.sh@10 -- # set +x 00:05:39.430 ************************************ 00:05:39.430 START TEST env 00:05:39.430 ************************************ 00:05:39.430 01:40:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:39.430 * Looking for test storage... 00:05:39.430 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:39.430 01:40:24 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:39.430 01:40:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.430 01:40:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.430 01:40:24 -- common/autotest_common.sh@10 -- # set +x 00:05:39.430 ************************************ 00:05:39.430 START TEST env_memory 00:05:39.430 ************************************ 00:05:39.430 01:40:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:39.430 00:05:39.430 00:05:39.430 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.430 http://cunit.sourceforge.net/ 00:05:39.430 00:05:39.430 00:05:39.430 Suite: memory 00:05:39.430 Test: alloc and free memory map ...[2024-04-15 01:40:24.856171] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:39.430 passed 00:05:39.430 Test: mem map translation ...[2024-04-15 01:40:24.876557] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:39.430 [2024-04-15 01:40:24.876582] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:39.430 [2024-04-15 01:40:24.876638] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:39.430 [2024-04-15 01:40:24.876650] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:39.430 passed 00:05:39.430 Test: mem map registration ...[2024-04-15 01:40:24.918357] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:39.430 [2024-04-15 01:40:24.918376] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:39.430 passed 00:05:39.430 Test: mem map adjacent registrations ...passed 00:05:39.430 00:05:39.430 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.430 suites 1 1 n/a 0 0 00:05:39.430 tests 4 4 4 0 0 00:05:39.430 asserts 152 152 152 0 n/a 00:05:39.430 00:05:39.430 Elapsed time = 0.139 seconds 00:05:39.430 00:05:39.430 real 0m0.145s 00:05:39.430 user 0m0.140s 00:05:39.430 sys 0m0.005s 
00:05:39.430 01:40:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.430 01:40:24 -- common/autotest_common.sh@10 -- # set +x 00:05:39.430 ************************************ 00:05:39.430 END TEST env_memory 00:05:39.430 ************************************ 00:05:39.430 01:40:24 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:39.430 01:40:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.430 01:40:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.430 01:40:24 -- common/autotest_common.sh@10 -- # set +x 00:05:39.430 ************************************ 00:05:39.430 START TEST env_vtophys 00:05:39.430 ************************************ 00:05:39.430 01:40:24 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:39.430 EAL: lib.eal log level changed from notice to debug 00:05:39.430 EAL: Detected lcore 0 as core 0 on socket 0 00:05:39.430 EAL: Detected lcore 1 as core 1 on socket 0 00:05:39.430 EAL: Detected lcore 2 as core 2 on socket 0 00:05:39.430 EAL: Detected lcore 3 as core 3 on socket 0 00:05:39.430 EAL: Detected lcore 4 as core 4 on socket 0 00:05:39.430 EAL: Detected lcore 5 as core 5 on socket 0 00:05:39.430 EAL: Detected lcore 6 as core 8 on socket 0 00:05:39.430 EAL: Detected lcore 7 as core 9 on socket 0 00:05:39.430 EAL: Detected lcore 8 as core 10 on socket 0 00:05:39.430 EAL: Detected lcore 9 as core 11 on socket 0 00:05:39.430 EAL: Detected lcore 10 as core 12 on socket 0 00:05:39.430 EAL: Detected lcore 11 as core 13 on socket 0 00:05:39.430 EAL: Detected lcore 12 as core 0 on socket 1 00:05:39.430 EAL: Detected lcore 13 as core 1 on socket 1 00:05:39.430 EAL: Detected lcore 14 as core 2 on socket 1 00:05:39.430 EAL: Detected lcore 15 as core 3 on socket 1 00:05:39.430 EAL: Detected lcore 16 as core 4 on socket 1 00:05:39.430 EAL: Detected lcore 17 as core 5 on socket 1 00:05:39.430 EAL: Detected lcore 18 as core 8 on socket 1 00:05:39.430 EAL: Detected lcore 19 as core 9 on socket 1 00:05:39.430 EAL: Detected lcore 20 as core 10 on socket 1 00:05:39.430 EAL: Detected lcore 21 as core 11 on socket 1 00:05:39.430 EAL: Detected lcore 22 as core 12 on socket 1 00:05:39.430 EAL: Detected lcore 23 as core 13 on socket 1 00:05:39.430 EAL: Detected lcore 24 as core 0 on socket 0 00:05:39.430 EAL: Detected lcore 25 as core 1 on socket 0 00:05:39.430 EAL: Detected lcore 26 as core 2 on socket 0 00:05:39.430 EAL: Detected lcore 27 as core 3 on socket 0 00:05:39.430 EAL: Detected lcore 28 as core 4 on socket 0 00:05:39.430 EAL: Detected lcore 29 as core 5 on socket 0 00:05:39.430 EAL: Detected lcore 30 as core 8 on socket 0 00:05:39.430 EAL: Detected lcore 31 as core 9 on socket 0 00:05:39.430 EAL: Detected lcore 32 as core 10 on socket 0 00:05:39.430 EAL: Detected lcore 33 as core 11 on socket 0 00:05:39.430 EAL: Detected lcore 34 as core 12 on socket 0 00:05:39.430 EAL: Detected lcore 35 as core 13 on socket 0 00:05:39.430 EAL: Detected lcore 36 as core 0 on socket 1 00:05:39.430 EAL: Detected lcore 37 as core 1 on socket 1 00:05:39.430 EAL: Detected lcore 38 as core 2 on socket 1 00:05:39.430 EAL: Detected lcore 39 as core 3 on socket 1 00:05:39.430 EAL: Detected lcore 40 as core 4 on socket 1 00:05:39.430 EAL: Detected lcore 41 as core 5 on socket 1 00:05:39.430 EAL: Detected lcore 42 as core 8 on socket 1 00:05:39.430 EAL: Detected lcore 43 as core 9 on socket 1 00:05:39.430 EAL: Detected 
lcore 44 as core 10 on socket 1 00:05:39.430 EAL: Detected lcore 45 as core 11 on socket 1 00:05:39.430 EAL: Detected lcore 46 as core 12 on socket 1 00:05:39.430 EAL: Detected lcore 47 as core 13 on socket 1 00:05:39.430 EAL: Maximum logical cores by configuration: 128 00:05:39.430 EAL: Detected CPU lcores: 48 00:05:39.430 EAL: Detected NUMA nodes: 2 00:05:39.430 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:39.430 EAL: Detected shared linkage of DPDK 00:05:39.430 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:39.430 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:39.430 EAL: Registered [vdev] bus. 00:05:39.430 EAL: bus.vdev log level changed from disabled to notice 00:05:39.430 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:39.430 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:39.430 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:39.430 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:39.430 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:39.430 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:39.430 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:39.430 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:39.430 EAL: No shared files mode enabled, IPC will be disabled 00:05:39.430 EAL: No shared files mode enabled, IPC is disabled 00:05:39.430 EAL: Bus pci wants IOVA as 'DC' 00:05:39.430 EAL: Bus vdev wants IOVA as 'DC' 00:05:39.430 EAL: Buses did not request a specific IOVA mode. 00:05:39.430 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:39.430 EAL: Selected IOVA mode 'VA' 00:05:39.430 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.430 EAL: Probing VFIO support... 00:05:39.430 EAL: IOMMU type 1 (Type 1) is supported 00:05:39.430 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:39.430 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:39.430 EAL: VFIO support initialized 00:05:39.430 EAL: Ask a virtual area of 0x2e000 bytes 00:05:39.430 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:39.430 EAL: Setting up physically contiguous memory... 
00:05:39.430 EAL: Setting maximum number of open files to 524288 00:05:39.430 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:39.430 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:39.431 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:39.431 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.431 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:39.431 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.431 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.431 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:39.431 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:39.431 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.431 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:39.431 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.431 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.431 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:39.431 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:39.431 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.431 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:39.431 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.431 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.431 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:39.431 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:39.431 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.431 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:39.431 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.431 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.431 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:39.431 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:39.431 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:39.431 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.431 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:39.431 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:39.431 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.431 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:39.431 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:39.431 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.431 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:39.431 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:39.431 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.431 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:39.431 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:39.431 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.431 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:39.431 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:39.431 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.431 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:39.431 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:39.431 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.431 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:39.431 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:39.431 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.431 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:39.431 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:39.431 EAL: Hugepages will be freed exactly as allocated. 00:05:39.431 EAL: No shared files mode enabled, IPC is disabled 00:05:39.431 EAL: No shared files mode enabled, IPC is disabled 00:05:39.431 EAL: TSC frequency is ~2700000 KHz 00:05:39.431 EAL: Main lcore 0 is ready (tid=7f84c6ceea00;cpuset=[0]) 00:05:39.431 EAL: Trying to obtain current memory policy. 00:05:39.431 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.431 EAL: Restoring previous memory policy: 0 00:05:39.431 EAL: request: mp_malloc_sync 00:05:39.431 EAL: No shared files mode enabled, IPC is disabled 00:05:39.431 EAL: Heap on socket 0 was expanded by 2MB 00:05:39.431 EAL: No shared files mode enabled, IPC is disabled 00:05:39.431 EAL: No shared files mode enabled, IPC is disabled 00:05:39.431 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:39.431 EAL: Mem event callback 'spdk:(nil)' registered 00:05:39.431 00:05:39.431 00:05:39.431 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.431 http://cunit.sourceforge.net/ 00:05:39.431 00:05:39.431 00:05:39.431 Suite: components_suite 00:05:39.431 Test: vtophys_malloc_test ...passed 00:05:39.431 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:39.431 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.689 EAL: Restoring previous memory policy: 4 00:05:39.689 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.689 EAL: request: mp_malloc_sync 00:05:39.690 EAL: No shared files mode enabled, IPC is disabled 00:05:39.690 EAL: Heap on socket 0 was expanded by 4MB 00:05:39.690 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.690 EAL: request: mp_malloc_sync 00:05:39.690 EAL: No shared files mode enabled, IPC is disabled 00:05:39.690 EAL: Heap on socket 0 was shrunk by 4MB 00:05:39.690 EAL: Trying to obtain current memory policy. 00:05:39.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.690 EAL: Restoring previous memory policy: 4 00:05:39.690 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.690 EAL: request: mp_malloc_sync 00:05:39.690 EAL: No shared files mode enabled, IPC is disabled 00:05:39.690 EAL: Heap on socket 0 was expanded by 6MB 00:05:39.690 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.690 EAL: request: mp_malloc_sync 00:05:39.690 EAL: No shared files mode enabled, IPC is disabled 00:05:39.690 EAL: Heap on socket 0 was shrunk by 6MB 00:05:39.690 EAL: Trying to obtain current memory policy. 00:05:39.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.690 EAL: Restoring previous memory policy: 4 00:05:39.690 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.690 EAL: request: mp_malloc_sync 00:05:39.690 EAL: No shared files mode enabled, IPC is disabled 00:05:39.690 EAL: Heap on socket 0 was expanded by 10MB 00:05:39.690 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.690 EAL: request: mp_malloc_sync 00:05:39.690 EAL: No shared files mode enabled, IPC is disabled 00:05:39.690 EAL: Heap on socket 0 was shrunk by 10MB 00:05:39.690 EAL: Trying to obtain current memory policy. 
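Note: each "Heap on socket 0 was expanded by .. MB" / "shrunk by .. MB" pair above comes from an allocate-translate-free cycle over growing buffer sizes inside the vtophys suite. A condensed C sketch of that pattern against the public SPDK env API (this is illustrative, not the actual test source; the app name and size progression are assumptions):

```c
/* Minimal allocate/translate/free loop, assuming SPDK headers and
 * libspdk_env_dpdk are available. Each spdk_dma_zmalloc() grows the
 * DPDK heap (the "expanded by" lines); each spdk_dma_free() lets it
 * shrink again (the "shrunk by" lines). */
#include <stdio.h>
#include "spdk/env.h"

int main(void)
{
	struct spdk_env_opts opts;
	size_t size;

	spdk_env_opts_init(&opts);
	opts.name = "vtophys_sketch"; /* hypothetical app name */
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	for (size = 4ULL << 20; size <= 1ULL << 30; size *= 4) {
		uint64_t mapped = size;
		void *buf = spdk_dma_zmalloc(size, 2ULL << 20, NULL);

		if (buf == NULL) {
			return 1;
		}
		/* Translation must exist for DMA-safe memory. */
		if (spdk_vtophys(buf, &mapped) == SPDK_VTOPHYS_ERROR) {
			fprintf(stderr, "no vtophys translation for %p\n", buf);
		}
		spdk_dma_free(buf);
	}
	return 0;
}
```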
00:05:39.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.690 EAL: Restoring previous memory policy: 4 00:05:39.690 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.690 EAL: request: mp_malloc_sync 00:05:39.690 EAL: No shared files mode enabled, IPC is disabled 00:05:39.690 EAL: Heap on socket 0 was expanded by 18MB 00:05:39.690 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.690 EAL: request: mp_malloc_sync 00:05:39.690 EAL: No shared files mode enabled, IPC is disabled 00:05:39.690 EAL: Heap on socket 0 was shrunk by 18MB 00:05:39.690 EAL: Trying to obtain current memory policy. 00:05:39.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.690 EAL: Restoring previous memory policy: 4 00:05:39.690 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.690 EAL: request: mp_malloc_sync 00:05:39.690 EAL: No shared files mode enabled, IPC is disabled 00:05:39.690 EAL: Heap on socket 0 was expanded by 34MB 00:05:39.690 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.690 EAL: request: mp_malloc_sync 00:05:39.690 EAL: No shared files mode enabled, IPC is disabled 00:05:39.690 EAL: Heap on socket 0 was shrunk by 34MB 00:05:39.690 EAL: Trying to obtain current memory policy. 00:05:39.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.690 EAL: Restoring previous memory policy: 4 00:05:39.690 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.690 EAL: request: mp_malloc_sync 00:05:39.690 EAL: No shared files mode enabled, IPC is disabled 00:05:39.690 EAL: Heap on socket 0 was expanded by 66MB 00:05:39.690 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.690 EAL: request: mp_malloc_sync 00:05:39.690 EAL: No shared files mode enabled, IPC is disabled 00:05:39.690 EAL: Heap on socket 0 was shrunk by 66MB 00:05:39.690 EAL: Trying to obtain current memory policy. 00:05:39.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.690 EAL: Restoring previous memory policy: 4 00:05:39.690 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.690 EAL: request: mp_malloc_sync 00:05:39.690 EAL: No shared files mode enabled, IPC is disabled 00:05:39.690 EAL: Heap on socket 0 was expanded by 130MB 00:05:39.690 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.690 EAL: request: mp_malloc_sync 00:05:39.690 EAL: No shared files mode enabled, IPC is disabled 00:05:39.690 EAL: Heap on socket 0 was shrunk by 130MB 00:05:39.690 EAL: Trying to obtain current memory policy. 00:05:39.690 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.690 EAL: Restoring previous memory policy: 4 00:05:39.690 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.690 EAL: request: mp_malloc_sync 00:05:39.690 EAL: No shared files mode enabled, IPC is disabled 00:05:39.690 EAL: Heap on socket 0 was expanded by 258MB 00:05:39.690 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.948 EAL: request: mp_malloc_sync 00:05:39.948 EAL: No shared files mode enabled, IPC is disabled 00:05:39.948 EAL: Heap on socket 0 was shrunk by 258MB 00:05:39.948 EAL: Trying to obtain current memory policy. 
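Note: the "Calling mem event callback 'spdk:(nil)'" lines interleaved above are DPDK notifying registered listeners as hugepage segments are mapped in and out; SPDK registers such a callback to keep its vtophys map current. A minimal sketch of registering one via the public DPDK API (the callback name and allocation size are illustrative; these events only fire in dynamic-memory mode, i.e. without --legacy-mem):

```c
/* Sketch of a DPDK mem event listener analogous to the 'spdk' one
 * named in the log; the real SPDK hook lives in lib/env_dpdk. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_memory.h>
#include <rte_malloc.h>

static void
mem_event_cb(enum rte_mem_event type, const void *addr, size_t len, void *arg)
{
	/* ALLOC fires after pages are mapped, FREE before they are
	 * unmapped -- the points where an app would (un)register the
	 * range for DMA. */
	printf("%s addr=%p len=%zu\n",
	       type == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
}

int main(int argc, char **argv)
{
	void *buf;

	if (rte_eal_init(argc, argv) < 0) {
		return 1;
	}
	if (rte_mem_event_callback_register("sketch", mem_event_cb, NULL) < 0) {
		return 1;
	}

	buf = rte_malloc(NULL, 32ULL << 20, 0); /* triggers ALLOC events */
	rte_free(buf);                          /* may trigger FREE events */
	return 0;
}
```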
00:05:39.948 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.948 EAL: Restoring previous memory policy: 4 00:05:39.948 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.948 EAL: request: mp_malloc_sync 00:05:39.948 EAL: No shared files mode enabled, IPC is disabled 00:05:39.948 EAL: Heap on socket 0 was expanded by 514MB 00:05:40.206 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.206 EAL: request: mp_malloc_sync 00:05:40.206 EAL: No shared files mode enabled, IPC is disabled 00:05:40.206 EAL: Heap on socket 0 was shrunk by 514MB 00:05:40.206 EAL: Trying to obtain current memory policy. 00:05:40.206 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.464 EAL: Restoring previous memory policy: 4 00:05:40.464 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.464 EAL: request: mp_malloc_sync 00:05:40.464 EAL: No shared files mode enabled, IPC is disabled 00:05:40.464 EAL: Heap on socket 0 was expanded by 1026MB 00:05:40.722 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.980 EAL: request: mp_malloc_sync 00:05:40.980 EAL: No shared files mode enabled, IPC is disabled 00:05:40.980 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:40.980 passed 00:05:40.980 00:05:40.980 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.980 suites 1 1 n/a 0 0 00:05:40.980 tests 2 2 2 0 0 00:05:40.980 asserts 497 497 497 0 n/a 00:05:40.980 00:05:40.980 Elapsed time = 1.361 seconds 00:05:40.980 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.980 EAL: request: mp_malloc_sync 00:05:40.980 EAL: No shared files mode enabled, IPC is disabled 00:05:40.980 EAL: Heap on socket 0 was shrunk by 2MB 00:05:40.980 EAL: No shared files mode enabled, IPC is disabled 00:05:40.980 EAL: No shared files mode enabled, IPC is disabled 00:05:40.980 EAL: No shared files mode enabled, IPC is disabled 00:05:40.980 00:05:40.980 real 0m1.473s 00:05:40.980 user 0m0.847s 00:05:40.980 sys 0m0.598s 00:05:40.980 01:40:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.980 01:40:26 -- common/autotest_common.sh@10 -- # set +x 00:05:40.980 ************************************ 00:05:40.980 END TEST env_vtophys 00:05:40.980 ************************************ 00:05:40.980 01:40:26 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:40.981 01:40:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.981 01:40:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.981 01:40:26 -- common/autotest_common.sh@10 -- # set +x 00:05:40.981 ************************************ 00:05:40.981 START TEST env_pci 00:05:40.981 ************************************ 00:05:40.981 01:40:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:40.981 00:05:40.981 00:05:40.981 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.981 http://cunit.sourceforge.net/ 00:05:40.981 00:05:40.981 00:05:40.981 Suite: pci 00:05:40.981 Test: pci_hook ...[2024-04-15 01:40:26.506820] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2035178 has claimed it 00:05:40.981 EAL: Cannot find device (10000:00:01.0) 00:05:40.981 EAL: Failed to attach device on primary process 00:05:40.981 passed 00:05:40.981 00:05:40.981 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.981 suites 1 1 n/a 0 0 00:05:40.981 tests 1 1 1 0 0 
00:05:40.981 asserts 25 25 25 0 n/a 00:05:40.981 00:05:40.981 Elapsed time = 0.018 seconds 00:05:40.981 00:05:40.981 real 0m0.029s 00:05:40.981 user 0m0.010s 00:05:40.981 sys 0m0.019s 00:05:40.981 01:40:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.981 01:40:26 -- common/autotest_common.sh@10 -- # set +x 00:05:40.981 ************************************ 00:05:40.981 END TEST env_pci 00:05:40.981 ************************************ 00:05:40.981 01:40:26 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:40.981 01:40:26 -- env/env.sh@15 -- # uname 00:05:40.981 01:40:26 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:40.981 01:40:26 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:40.981 01:40:26 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.981 01:40:26 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:40.981 01:40:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.981 01:40:26 -- common/autotest_common.sh@10 -- # set +x 00:05:40.981 ************************************ 00:05:40.981 START TEST env_dpdk_post_init 00:05:40.981 ************************************ 00:05:40.981 01:40:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.981 EAL: Detected CPU lcores: 48 00:05:40.981 EAL: Detected NUMA nodes: 2 00:05:40.981 EAL: Detected shared linkage of DPDK 00:05:40.981 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.981 EAL: Selected IOVA mode 'VA' 00:05:40.981 EAL: No free 2048 kB hugepages reported on node 1 00:05:40.981 EAL: VFIO support initialized 00:05:40.981 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:41.239 EAL: Using IOMMU type 1 (Type 1) 00:05:41.239 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:41.239 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:41.239 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:41.239 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:41.239 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:41.239 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:41.239 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:41.239 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:41.239 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:41.239 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:41.239 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:41.239 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:41.239 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:41.239 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:41.239 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:41.240 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:42.172 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:45.452 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:45.452 EAL: 
Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:45.452 Starting DPDK initialization... 00:05:45.452 Starting SPDK post initialization... 00:05:45.452 SPDK NVMe probe 00:05:45.452 Attaching to 0000:88:00.0 00:05:45.452 Attached to 0000:88:00.0 00:05:45.452 Cleaning up... 00:05:45.452 00:05:45.452 real 0m4.369s 00:05:45.452 user 0m3.224s 00:05:45.452 sys 0m0.204s 00:05:45.452 01:40:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.452 01:40:30 -- common/autotest_common.sh@10 -- # set +x 00:05:45.452 ************************************ 00:05:45.452 END TEST env_dpdk_post_init 00:05:45.452 ************************************ 00:05:45.452 01:40:30 -- env/env.sh@26 -- # uname 00:05:45.452 01:40:30 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:45.452 01:40:30 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:45.452 01:40:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.452 01:40:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.452 01:40:30 -- common/autotest_common.sh@10 -- # set +x 00:05:45.452 ************************************ 00:05:45.452 START TEST env_mem_callbacks 00:05:45.452 ************************************ 00:05:45.452 01:40:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:45.452 EAL: Detected CPU lcores: 48 00:05:45.452 EAL: Detected NUMA nodes: 2 00:05:45.452 EAL: Detected shared linkage of DPDK 00:05:45.452 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:45.452 EAL: Selected IOVA mode 'VA' 00:05:45.452 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.452 EAL: VFIO support initialized 00:05:45.452 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:45.452 00:05:45.452 00:05:45.452 CUnit - A unit testing framework for C - Version 2.1-3 00:05:45.452 http://cunit.sourceforge.net/ 00:05:45.452 00:05:45.452 00:05:45.452 Suite: memory 00:05:45.452 Test: test ... 
00:05:45.452 register 0x200000200000 2097152 00:05:45.452 malloc 3145728 00:05:45.452 register 0x200000400000 4194304 00:05:45.452 buf 0x200000500000 len 3145728 PASSED 00:05:45.452 malloc 64 00:05:45.452 buf 0x2000004fff40 len 64 PASSED 00:05:45.452 malloc 4194304 00:05:45.452 register 0x200000800000 6291456 00:05:45.452 buf 0x200000a00000 len 4194304 PASSED 00:05:45.452 free 0x200000500000 3145728 00:05:45.452 free 0x2000004fff40 64 00:05:45.452 unregister 0x200000400000 4194304 PASSED 00:05:45.452 free 0x200000a00000 4194304 00:05:45.452 unregister 0x200000800000 6291456 PASSED 00:05:45.452 malloc 8388608 00:05:45.452 register 0x200000400000 10485760 00:05:45.452 buf 0x200000600000 len 8388608 PASSED 00:05:45.452 free 0x200000600000 8388608 00:05:45.452 unregister 0x200000400000 10485760 PASSED 00:05:45.452 passed 00:05:45.452 00:05:45.452 Run Summary: Type Total Ran Passed Failed Inactive 00:05:45.452 suites 1 1 n/a 0 0 00:05:45.453 tests 1 1 1 0 0 00:05:45.453 asserts 15 15 15 0 n/a 00:05:45.453 00:05:45.453 Elapsed time = 0.005 seconds 00:05:45.453 00:05:45.453 real 0m0.043s 00:05:45.453 user 0m0.009s 00:05:45.453 sys 0m0.034s 00:05:45.453 01:40:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.453 01:40:30 -- common/autotest_common.sh@10 -- # set +x 00:05:45.453 ************************************ 00:05:45.453 END TEST env_mem_callbacks 00:05:45.453 ************************************ 00:05:45.453 00:05:45.453 real 0m6.242s 00:05:45.453 user 0m4.299s 00:05:45.453 sys 0m0.996s 00:05:45.453 01:40:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.453 01:40:31 -- common/autotest_common.sh@10 -- # set +x 00:05:45.453 ************************************ 00:05:45.453 END TEST env 00:05:45.453 ************************************ 00:05:45.453 01:40:31 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:45.453 01:40:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:45.453 01:40:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:45.453 01:40:31 -- common/autotest_common.sh@10 -- # set +x 00:05:45.453 ************************************ 00:05:45.453 START TEST rpc 00:05:45.453 ************************************ 00:05:45.453 01:40:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:45.453 * Looking for test storage... 00:05:45.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:45.453 01:40:31 -- rpc/rpc.sh@65 -- # spdk_pid=2035838 00:05:45.453 01:40:31 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:45.453 01:40:31 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:45.453 01:40:31 -- rpc/rpc.sh@67 -- # waitforlisten 2035838 00:05:45.453 01:40:31 -- common/autotest_common.sh@819 -- # '[' -z 2035838 ']' 00:05:45.453 01:40:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.453 01:40:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:45.453 01:40:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:45.453 01:40:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:45.453 01:40:31 -- common/autotest_common.sh@10 -- # set +x 00:05:45.712 [2024-04-15 01:40:31.133261] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:05:45.712 [2024-04-15 01:40:31.133371] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2035838 ] 00:05:45.712 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.712 [2024-04-15 01:40:31.193303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.712 [2024-04-15 01:40:31.274617] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:45.712 [2024-04-15 01:40:31.274782] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:45.712 [2024-04-15 01:40:31.274799] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2035838' to capture a snapshot of events at runtime. 00:05:45.712 [2024-04-15 01:40:31.274812] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2035838 for offline analysis/debug. 00:05:45.712 [2024-04-15 01:40:31.274840] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.647 01:40:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:46.647 01:40:32 -- common/autotest_common.sh@852 -- # return 0 00:05:46.647 01:40:32 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:46.647 01:40:32 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:46.647 01:40:32 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:46.647 01:40:32 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:46.647 01:40:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.647 01:40:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.647 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.647 ************************************ 00:05:46.647 START TEST rpc_integrity 00:05:46.647 ************************************ 00:05:46.647 01:40:32 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:46.647 01:40:32 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:46.647 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.647 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.647 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.647 01:40:32 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:46.647 01:40:32 -- rpc/rpc.sh@13 -- # jq length 00:05:46.647 01:40:32 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:46.647 01:40:32 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:46.647 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.647 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.647 01:40:32 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:05:46.647 01:40:32 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:46.647 01:40:32 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:46.647 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.647 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.647 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.647 01:40:32 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:46.647 { 00:05:46.647 "name": "Malloc0", 00:05:46.647 "aliases": [ 00:05:46.647 "c03ed1c2-6ddd-49b9-abc4-74d814454e0a" 00:05:46.647 ], 00:05:46.647 "product_name": "Malloc disk", 00:05:46.647 "block_size": 512, 00:05:46.647 "num_blocks": 16384, 00:05:46.647 "uuid": "c03ed1c2-6ddd-49b9-abc4-74d814454e0a", 00:05:46.647 "assigned_rate_limits": { 00:05:46.647 "rw_ios_per_sec": 0, 00:05:46.647 "rw_mbytes_per_sec": 0, 00:05:46.647 "r_mbytes_per_sec": 0, 00:05:46.647 "w_mbytes_per_sec": 0 00:05:46.647 }, 00:05:46.647 "claimed": false, 00:05:46.647 "zoned": false, 00:05:46.647 "supported_io_types": { 00:05:46.647 "read": true, 00:05:46.647 "write": true, 00:05:46.647 "unmap": true, 00:05:46.647 "write_zeroes": true, 00:05:46.647 "flush": true, 00:05:46.647 "reset": true, 00:05:46.647 "compare": false, 00:05:46.647 "compare_and_write": false, 00:05:46.647 "abort": true, 00:05:46.647 "nvme_admin": false, 00:05:46.647 "nvme_io": false 00:05:46.647 }, 00:05:46.647 "memory_domains": [ 00:05:46.647 { 00:05:46.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.647 "dma_device_type": 2 00:05:46.647 } 00:05:46.647 ], 00:05:46.647 "driver_specific": {} 00:05:46.647 } 00:05:46.647 ]' 00:05:46.647 01:40:32 -- rpc/rpc.sh@17 -- # jq length 00:05:46.647 01:40:32 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:46.647 01:40:32 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:46.647 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.647 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.647 [2024-04-15 01:40:32.178351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:46.647 [2024-04-15 01:40:32.178403] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:46.647 [2024-04-15 01:40:32.178430] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18cf410 00:05:46.647 [2024-04-15 01:40:32.178446] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:46.647 [2024-04-15 01:40:32.179943] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:46.647 [2024-04-15 01:40:32.179971] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:46.647 Passthru0 00:05:46.647 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.647 01:40:32 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:46.647 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.647 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.647 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.647 01:40:32 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:46.647 { 00:05:46.647 "name": "Malloc0", 00:05:46.647 "aliases": [ 00:05:46.647 "c03ed1c2-6ddd-49b9-abc4-74d814454e0a" 00:05:46.647 ], 00:05:46.647 "product_name": "Malloc disk", 00:05:46.647 "block_size": 512, 00:05:46.647 "num_blocks": 16384, 00:05:46.647 "uuid": "c03ed1c2-6ddd-49b9-abc4-74d814454e0a", 00:05:46.647 "assigned_rate_limits": { 00:05:46.647 "rw_ios_per_sec": 0, 00:05:46.647 "rw_mbytes_per_sec": 0, 00:05:46.647 
"r_mbytes_per_sec": 0, 00:05:46.647 "w_mbytes_per_sec": 0 00:05:46.647 }, 00:05:46.647 "claimed": true, 00:05:46.647 "claim_type": "exclusive_write", 00:05:46.647 "zoned": false, 00:05:46.647 "supported_io_types": { 00:05:46.647 "read": true, 00:05:46.647 "write": true, 00:05:46.647 "unmap": true, 00:05:46.647 "write_zeroes": true, 00:05:46.647 "flush": true, 00:05:46.647 "reset": true, 00:05:46.647 "compare": false, 00:05:46.647 "compare_and_write": false, 00:05:46.647 "abort": true, 00:05:46.647 "nvme_admin": false, 00:05:46.647 "nvme_io": false 00:05:46.647 }, 00:05:46.647 "memory_domains": [ 00:05:46.647 { 00:05:46.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.647 "dma_device_type": 2 00:05:46.647 } 00:05:46.647 ], 00:05:46.647 "driver_specific": {} 00:05:46.647 }, 00:05:46.647 { 00:05:46.647 "name": "Passthru0", 00:05:46.647 "aliases": [ 00:05:46.647 "37dda31e-ff94-5dfe-8307-872c36f72dfa" 00:05:46.647 ], 00:05:46.647 "product_name": "passthru", 00:05:46.647 "block_size": 512, 00:05:46.647 "num_blocks": 16384, 00:05:46.647 "uuid": "37dda31e-ff94-5dfe-8307-872c36f72dfa", 00:05:46.647 "assigned_rate_limits": { 00:05:46.647 "rw_ios_per_sec": 0, 00:05:46.647 "rw_mbytes_per_sec": 0, 00:05:46.647 "r_mbytes_per_sec": 0, 00:05:46.647 "w_mbytes_per_sec": 0 00:05:46.647 }, 00:05:46.647 "claimed": false, 00:05:46.647 "zoned": false, 00:05:46.647 "supported_io_types": { 00:05:46.647 "read": true, 00:05:46.647 "write": true, 00:05:46.647 "unmap": true, 00:05:46.647 "write_zeroes": true, 00:05:46.647 "flush": true, 00:05:46.647 "reset": true, 00:05:46.647 "compare": false, 00:05:46.647 "compare_and_write": false, 00:05:46.647 "abort": true, 00:05:46.647 "nvme_admin": false, 00:05:46.647 "nvme_io": false 00:05:46.647 }, 00:05:46.647 "memory_domains": [ 00:05:46.647 { 00:05:46.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.648 "dma_device_type": 2 00:05:46.648 } 00:05:46.648 ], 00:05:46.648 "driver_specific": { 00:05:46.648 "passthru": { 00:05:46.648 "name": "Passthru0", 00:05:46.648 "base_bdev_name": "Malloc0" 00:05:46.648 } 00:05:46.648 } 00:05:46.648 } 00:05:46.648 ]' 00:05:46.648 01:40:32 -- rpc/rpc.sh@21 -- # jq length 00:05:46.648 01:40:32 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:46.648 01:40:32 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:46.648 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.648 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.648 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.648 01:40:32 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:46.648 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.648 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.648 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.648 01:40:32 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:46.648 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.648 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.648 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.648 01:40:32 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:46.648 01:40:32 -- rpc/rpc.sh@26 -- # jq length 00:05:46.905 01:40:32 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:46.905 00:05:46.905 real 0m0.229s 00:05:46.905 user 0m0.154s 00:05:46.905 sys 0m0.015s 00:05:46.905 01:40:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.905 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.905 ************************************ 
00:05:46.905 END TEST rpc_integrity 00:05:46.905 ************************************ 00:05:46.905 01:40:32 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:46.905 01:40:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.905 01:40:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.905 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.905 ************************************ 00:05:46.905 START TEST rpc_plugins 00:05:46.905 ************************************ 00:05:46.905 01:40:32 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:46.905 01:40:32 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:46.905 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.905 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.905 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.905 01:40:32 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:46.905 01:40:32 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:46.905 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.905 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.905 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.905 01:40:32 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:46.905 { 00:05:46.905 "name": "Malloc1", 00:05:46.905 "aliases": [ 00:05:46.905 "377e2b47-e99a-49ec-8fee-e3567f1b7b83" 00:05:46.905 ], 00:05:46.905 "product_name": "Malloc disk", 00:05:46.905 "block_size": 4096, 00:05:46.905 "num_blocks": 256, 00:05:46.905 "uuid": "377e2b47-e99a-49ec-8fee-e3567f1b7b83", 00:05:46.905 "assigned_rate_limits": { 00:05:46.905 "rw_ios_per_sec": 0, 00:05:46.905 "rw_mbytes_per_sec": 0, 00:05:46.905 "r_mbytes_per_sec": 0, 00:05:46.905 "w_mbytes_per_sec": 0 00:05:46.905 }, 00:05:46.905 "claimed": false, 00:05:46.905 "zoned": false, 00:05:46.905 "supported_io_types": { 00:05:46.905 "read": true, 00:05:46.905 "write": true, 00:05:46.905 "unmap": true, 00:05:46.905 "write_zeroes": true, 00:05:46.905 "flush": true, 00:05:46.905 "reset": true, 00:05:46.905 "compare": false, 00:05:46.905 "compare_and_write": false, 00:05:46.905 "abort": true, 00:05:46.905 "nvme_admin": false, 00:05:46.905 "nvme_io": false 00:05:46.905 }, 00:05:46.905 "memory_domains": [ 00:05:46.905 { 00:05:46.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:46.905 "dma_device_type": 2 00:05:46.905 } 00:05:46.905 ], 00:05:46.905 "driver_specific": {} 00:05:46.905 } 00:05:46.905 ]' 00:05:46.905 01:40:32 -- rpc/rpc.sh@32 -- # jq length 00:05:46.905 01:40:32 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:46.905 01:40:32 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:46.905 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.905 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.905 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.905 01:40:32 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:46.905 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.905 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.905 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.905 01:40:32 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:46.905 01:40:32 -- rpc/rpc.sh@36 -- # jq length 00:05:46.905 01:40:32 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:46.905 00:05:46.905 real 0m0.113s 00:05:46.905 user 0m0.074s 00:05:46.905 sys 0m0.009s 00:05:46.905 01:40:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:46.905 01:40:32 -- 
common/autotest_common.sh@10 -- # set +x 00:05:46.905 ************************************ 00:05:46.905 END TEST rpc_plugins 00:05:46.905 ************************************ 00:05:46.905 01:40:32 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:46.905 01:40:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:46.905 01:40:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:46.905 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.905 ************************************ 00:05:46.905 START TEST rpc_trace_cmd_test 00:05:46.905 ************************************ 00:05:46.905 01:40:32 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:46.905 01:40:32 -- rpc/rpc.sh@40 -- # local info 00:05:46.905 01:40:32 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:46.905 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:46.905 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:46.905 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:46.905 01:40:32 -- rpc/rpc.sh@42 -- # info='{ 00:05:46.905 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2035838", 00:05:46.905 "tpoint_group_mask": "0x8", 00:05:46.905 "iscsi_conn": { 00:05:46.906 "mask": "0x2", 00:05:46.906 "tpoint_mask": "0x0" 00:05:46.906 }, 00:05:46.906 "scsi": { 00:05:46.906 "mask": "0x4", 00:05:46.906 "tpoint_mask": "0x0" 00:05:46.906 }, 00:05:46.906 "bdev": { 00:05:46.906 "mask": "0x8", 00:05:46.906 "tpoint_mask": "0xffffffffffffffff" 00:05:46.906 }, 00:05:46.906 "nvmf_rdma": { 00:05:46.906 "mask": "0x10", 00:05:46.906 "tpoint_mask": "0x0" 00:05:46.906 }, 00:05:46.906 "nvmf_tcp": { 00:05:46.906 "mask": "0x20", 00:05:46.906 "tpoint_mask": "0x0" 00:05:46.906 }, 00:05:46.906 "ftl": { 00:05:46.906 "mask": "0x40", 00:05:46.906 "tpoint_mask": "0x0" 00:05:46.906 }, 00:05:46.906 "blobfs": { 00:05:46.906 "mask": "0x80", 00:05:46.906 "tpoint_mask": "0x0" 00:05:46.906 }, 00:05:46.906 "dsa": { 00:05:46.906 "mask": "0x200", 00:05:46.906 "tpoint_mask": "0x0" 00:05:46.906 }, 00:05:46.906 "thread": { 00:05:46.906 "mask": "0x400", 00:05:46.906 "tpoint_mask": "0x0" 00:05:46.906 }, 00:05:46.906 "nvme_pcie": { 00:05:46.906 "mask": "0x800", 00:05:46.906 "tpoint_mask": "0x0" 00:05:46.906 }, 00:05:46.906 "iaa": { 00:05:46.906 "mask": "0x1000", 00:05:46.906 "tpoint_mask": "0x0" 00:05:46.906 }, 00:05:46.906 "nvme_tcp": { 00:05:46.906 "mask": "0x2000", 00:05:46.906 "tpoint_mask": "0x0" 00:05:46.906 }, 00:05:46.906 "bdev_nvme": { 00:05:46.906 "mask": "0x4000", 00:05:46.906 "tpoint_mask": "0x0" 00:05:46.906 } 00:05:46.906 }' 00:05:46.906 01:40:32 -- rpc/rpc.sh@43 -- # jq length 00:05:46.906 01:40:32 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:46.906 01:40:32 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:47.163 01:40:32 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:47.163 01:40:32 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:47.163 01:40:32 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:47.163 01:40:32 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:47.163 01:40:32 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:47.163 01:40:32 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:47.163 01:40:32 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:47.163 00:05:47.163 real 0m0.203s 00:05:47.163 user 0m0.175s 00:05:47.163 sys 0m0.018s 00:05:47.163 01:40:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.163 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:47.163 ************************************ 
00:05:47.163 END TEST rpc_trace_cmd_test 00:05:47.163 ************************************ 00:05:47.163 01:40:32 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:47.163 01:40:32 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:47.163 01:40:32 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:47.163 01:40:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.163 01:40:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.163 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:47.163 ************************************ 00:05:47.163 START TEST rpc_daemon_integrity 00:05:47.163 ************************************ 00:05:47.163 01:40:32 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:47.163 01:40:32 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:47.163 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.163 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:47.163 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.163 01:40:32 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:47.163 01:40:32 -- rpc/rpc.sh@13 -- # jq length 00:05:47.163 01:40:32 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:47.163 01:40:32 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:47.163 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.163 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:47.163 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.163 01:40:32 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:47.163 01:40:32 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:47.163 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.163 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:47.163 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.163 01:40:32 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:47.163 { 00:05:47.163 "name": "Malloc2", 00:05:47.163 "aliases": [ 00:05:47.163 "1c21a9fc-f312-4b42-92cd-edc077e828a3" 00:05:47.163 ], 00:05:47.163 "product_name": "Malloc disk", 00:05:47.163 "block_size": 512, 00:05:47.163 "num_blocks": 16384, 00:05:47.163 "uuid": "1c21a9fc-f312-4b42-92cd-edc077e828a3", 00:05:47.163 "assigned_rate_limits": { 00:05:47.163 "rw_ios_per_sec": 0, 00:05:47.163 "rw_mbytes_per_sec": 0, 00:05:47.163 "r_mbytes_per_sec": 0, 00:05:47.163 "w_mbytes_per_sec": 0 00:05:47.163 }, 00:05:47.163 "claimed": false, 00:05:47.163 "zoned": false, 00:05:47.163 "supported_io_types": { 00:05:47.163 "read": true, 00:05:47.163 "write": true, 00:05:47.163 "unmap": true, 00:05:47.163 "write_zeroes": true, 00:05:47.163 "flush": true, 00:05:47.163 "reset": true, 00:05:47.163 "compare": false, 00:05:47.163 "compare_and_write": false, 00:05:47.163 "abort": true, 00:05:47.163 "nvme_admin": false, 00:05:47.163 "nvme_io": false 00:05:47.163 }, 00:05:47.163 "memory_domains": [ 00:05:47.163 { 00:05:47.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.163 "dma_device_type": 2 00:05:47.163 } 00:05:47.163 ], 00:05:47.163 "driver_specific": {} 00:05:47.163 } 00:05:47.163 ]' 00:05:47.163 01:40:32 -- rpc/rpc.sh@17 -- # jq length 00:05:47.163 01:40:32 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:47.163 01:40:32 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:47.163 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.163 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:47.163 [2024-04-15 01:40:32.800110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:47.163 [2024-04-15 
01:40:32.800155] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:47.163 [2024-04-15 01:40:32.800180] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x18ceff0 00:05:47.163 [2024-04-15 01:40:32.800194] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:47.163 [2024-04-15 01:40:32.801528] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:47.163 [2024-04-15 01:40:32.801567] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:47.163 Passthru0 00:05:47.163 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.163 01:40:32 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:47.163 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.163 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:47.422 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.422 01:40:32 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:47.422 { 00:05:47.422 "name": "Malloc2", 00:05:47.422 "aliases": [ 00:05:47.422 "1c21a9fc-f312-4b42-92cd-edc077e828a3" 00:05:47.422 ], 00:05:47.422 "product_name": "Malloc disk", 00:05:47.422 "block_size": 512, 00:05:47.422 "num_blocks": 16384, 00:05:47.422 "uuid": "1c21a9fc-f312-4b42-92cd-edc077e828a3", 00:05:47.422 "assigned_rate_limits": { 00:05:47.422 "rw_ios_per_sec": 0, 00:05:47.422 "rw_mbytes_per_sec": 0, 00:05:47.422 "r_mbytes_per_sec": 0, 00:05:47.422 "w_mbytes_per_sec": 0 00:05:47.422 }, 00:05:47.422 "claimed": true, 00:05:47.422 "claim_type": "exclusive_write", 00:05:47.422 "zoned": false, 00:05:47.422 "supported_io_types": { 00:05:47.422 "read": true, 00:05:47.422 "write": true, 00:05:47.422 "unmap": true, 00:05:47.422 "write_zeroes": true, 00:05:47.422 "flush": true, 00:05:47.422 "reset": true, 00:05:47.422 "compare": false, 00:05:47.422 "compare_and_write": false, 00:05:47.422 "abort": true, 00:05:47.422 "nvme_admin": false, 00:05:47.422 "nvme_io": false 00:05:47.422 }, 00:05:47.422 "memory_domains": [ 00:05:47.422 { 00:05:47.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.422 "dma_device_type": 2 00:05:47.422 } 00:05:47.422 ], 00:05:47.422 "driver_specific": {} 00:05:47.422 }, 00:05:47.422 { 00:05:47.422 "name": "Passthru0", 00:05:47.422 "aliases": [ 00:05:47.422 "dd651d88-9147-558f-ab86-388a20fbeb74" 00:05:47.422 ], 00:05:47.422 "product_name": "passthru", 00:05:47.422 "block_size": 512, 00:05:47.422 "num_blocks": 16384, 00:05:47.422 "uuid": "dd651d88-9147-558f-ab86-388a20fbeb74", 00:05:47.422 "assigned_rate_limits": { 00:05:47.422 "rw_ios_per_sec": 0, 00:05:47.422 "rw_mbytes_per_sec": 0, 00:05:47.422 "r_mbytes_per_sec": 0, 00:05:47.422 "w_mbytes_per_sec": 0 00:05:47.422 }, 00:05:47.422 "claimed": false, 00:05:47.422 "zoned": false, 00:05:47.422 "supported_io_types": { 00:05:47.422 "read": true, 00:05:47.422 "write": true, 00:05:47.422 "unmap": true, 00:05:47.422 "write_zeroes": true, 00:05:47.422 "flush": true, 00:05:47.422 "reset": true, 00:05:47.422 "compare": false, 00:05:47.422 "compare_and_write": false, 00:05:47.422 "abort": true, 00:05:47.422 "nvme_admin": false, 00:05:47.422 "nvme_io": false 00:05:47.422 }, 00:05:47.422 "memory_domains": [ 00:05:47.422 { 00:05:47.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:47.422 "dma_device_type": 2 00:05:47.422 } 00:05:47.422 ], 00:05:47.422 "driver_specific": { 00:05:47.422 "passthru": { 00:05:47.422 "name": "Passthru0", 00:05:47.422 "base_bdev_name": "Malloc2" 00:05:47.422 } 00:05:47.422 } 00:05:47.422 } 
00:05:47.422 ]' 00:05:47.422 01:40:32 -- rpc/rpc.sh@21 -- # jq length 00:05:47.422 01:40:32 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:47.422 01:40:32 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:47.422 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.422 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:47.422 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.422 01:40:32 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:47.422 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.422 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:47.422 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.422 01:40:32 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:47.422 01:40:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:47.422 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:47.422 01:40:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:47.422 01:40:32 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:47.422 01:40:32 -- rpc/rpc.sh@26 -- # jq length 00:05:47.422 01:40:32 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:47.422 00:05:47.422 real 0m0.223s 00:05:47.422 user 0m0.143s 00:05:47.422 sys 0m0.024s 00:05:47.422 01:40:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.422 01:40:32 -- common/autotest_common.sh@10 -- # set +x 00:05:47.422 ************************************ 00:05:47.422 END TEST rpc_daemon_integrity 00:05:47.422 ************************************ 00:05:47.422 01:40:32 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:47.422 01:40:32 -- rpc/rpc.sh@84 -- # killprocess 2035838 00:05:47.422 01:40:32 -- common/autotest_common.sh@926 -- # '[' -z 2035838 ']' 00:05:47.422 01:40:32 -- common/autotest_common.sh@930 -- # kill -0 2035838 00:05:47.422 01:40:32 -- common/autotest_common.sh@931 -- # uname 00:05:47.422 01:40:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:47.422 01:40:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2035838 00:05:47.422 01:40:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:47.422 01:40:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:47.422 01:40:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2035838' 00:05:47.422 killing process with pid 2035838 00:05:47.422 01:40:32 -- common/autotest_common.sh@945 -- # kill 2035838 00:05:47.422 01:40:32 -- common/autotest_common.sh@950 -- # wait 2035838 00:05:47.987 00:05:47.987 real 0m2.319s 00:05:47.987 user 0m2.972s 00:05:47.987 sys 0m0.561s 00:05:47.987 01:40:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.987 01:40:33 -- common/autotest_common.sh@10 -- # set +x 00:05:47.987 ************************************ 00:05:47.987 END TEST rpc 00:05:47.987 ************************************ 00:05:47.987 01:40:33 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:47.987 01:40:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.987 01:40:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.987 01:40:33 -- common/autotest_common.sh@10 -- # set +x 00:05:47.987 ************************************ 00:05:47.987 START TEST rpc_client 00:05:47.987 ************************************ 00:05:47.987 01:40:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 
00:05:47.987 * Looking for test storage... 00:05:47.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:47.987 01:40:33 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:47.987 OK 00:05:47.987 01:40:33 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:47.987 00:05:47.987 real 0m0.064s 00:05:47.987 user 0m0.027s 00:05:47.987 sys 0m0.042s 00:05:47.987 01:40:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:47.987 01:40:33 -- common/autotest_common.sh@10 -- # set +x 00:05:47.987 ************************************ 00:05:47.987 END TEST rpc_client 00:05:47.987 ************************************ 00:05:47.987 01:40:33 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:47.987 01:40:33 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:47.987 01:40:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:47.987 01:40:33 -- common/autotest_common.sh@10 -- # set +x 00:05:47.987 ************************************ 00:05:47.987 START TEST json_config 00:05:47.987 ************************************ 00:05:47.987 01:40:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:47.987 01:40:33 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:47.987 01:40:33 -- nvmf/common.sh@7 -- # uname -s 00:05:47.987 01:40:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:47.987 01:40:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:47.987 01:40:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:47.987 01:40:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:47.987 01:40:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:47.987 01:40:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:47.987 01:40:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:47.987 01:40:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:47.987 01:40:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:47.987 01:40:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:47.987 01:40:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:47.987 01:40:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:47.987 01:40:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:47.987 01:40:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:47.987 01:40:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:47.987 01:40:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:47.987 01:40:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:47.987 01:40:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:47.987 01:40:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:47.988 01:40:33 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.988 01:40:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.988 01:40:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.988 01:40:33 -- paths/export.sh@5 -- # export PATH 00:05:47.988 01:40:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:47.988 01:40:33 -- nvmf/common.sh@46 -- # : 0 00:05:47.988 01:40:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:47.988 01:40:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:47.988 01:40:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:47.988 01:40:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:47.988 01:40:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:47.988 01:40:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:47.988 01:40:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:47.988 01:40:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:47.988 01:40:33 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:47.988 01:40:33 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:47.988 01:40:33 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:47.988 01:40:33 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:47.988 01:40:33 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:47.988 01:40:33 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:47.988 01:40:33 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:47.988 01:40:33 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:47.988 01:40:33 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:47.988 01:40:33 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:47.988 01:40:33 -- json_config/json_config.sh@33 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:47.988 01:40:33 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:47.988 01:40:33 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:47.988 01:40:33 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:47.988 01:40:33 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:47.988 INFO: JSON configuration test init 00:05:47.988 01:40:33 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:47.988 01:40:33 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:47.988 01:40:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:47.988 01:40:33 -- common/autotest_common.sh@10 -- # set +x 00:05:47.988 01:40:33 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:47.988 01:40:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:47.988 01:40:33 -- common/autotest_common.sh@10 -- # set +x 00:05:47.988 01:40:33 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:47.988 01:40:33 -- json_config/json_config.sh@98 -- # local app=target 00:05:47.988 01:40:33 -- json_config/json_config.sh@99 -- # shift 00:05:47.988 01:40:33 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:47.988 01:40:33 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:47.988 01:40:33 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:47.988 01:40:33 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:47.988 01:40:33 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:47.988 01:40:33 -- json_config/json_config.sh@111 -- # app_pid[$app]=2036318 00:05:47.988 01:40:33 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:47.988 01:40:33 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:47.988 Waiting for target to run... 00:05:47.988 01:40:33 -- json_config/json_config.sh@114 -- # waitforlisten 2036318 /var/tmp/spdk_tgt.sock 00:05:47.988 01:40:33 -- common/autotest_common.sh@819 -- # '[' -z 2036318 ']' 00:05:47.988 01:40:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:47.988 01:40:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:47.988 01:40:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:47.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:47.988 01:40:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:47.988 01:40:33 -- common/autotest_common.sh@10 -- # set +x 00:05:47.988 [2024-04-15 01:40:33.564135] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:05:47.988 [2024-04-15 01:40:33.564232] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2036318 ] 00:05:47.988 EAL: No free 2048 kB hugepages reported on node 1 00:05:48.559 [2024-04-15 01:40:33.901461] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.559 [2024-04-15 01:40:33.963546] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:48.559 [2024-04-15 01:40:33.963722] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.150 01:40:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:49.150 01:40:34 -- common/autotest_common.sh@852 -- # return 0 00:05:49.150 01:40:34 -- json_config/json_config.sh@115 -- # echo '' 00:05:49.150 00:05:49.150 01:40:34 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:49.150 01:40:34 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:49.150 01:40:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:49.150 01:40:34 -- common/autotest_common.sh@10 -- # set +x 00:05:49.150 01:40:34 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:49.150 01:40:34 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:49.150 01:40:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:49.150 01:40:34 -- common/autotest_common.sh@10 -- # set +x 00:05:49.150 01:40:34 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:49.150 01:40:34 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:49.150 01:40:34 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:52.433 01:40:37 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:52.433 01:40:37 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:52.433 01:40:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:52.433 01:40:37 -- common/autotest_common.sh@10 -- # set +x 00:05:52.433 01:40:37 -- json_config/json_config.sh@48 -- # local ret=0 00:05:52.433 01:40:37 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:52.433 01:40:37 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:52.433 01:40:37 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:52.433 01:40:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:52.433 01:40:37 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:52.433 01:40:37 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:52.433 01:40:37 -- json_config/json_config.sh@51 -- # local get_types 00:05:52.433 01:40:37 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:52.433 01:40:37 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:52.433 01:40:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:52.433 01:40:37 -- common/autotest_common.sh@10 -- # set +x 00:05:52.433 01:40:37 -- json_config/json_config.sh@58 -- # return 0 00:05:52.433 01:40:37 -- 
json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:52.433 01:40:37 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:52.433 01:40:37 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:52.433 01:40:37 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:52.433 01:40:37 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:52.433 01:40:37 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:52.433 01:40:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:52.433 01:40:37 -- common/autotest_common.sh@10 -- # set +x 00:05:52.433 01:40:37 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:52.433 01:40:37 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:52.433 01:40:37 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:52.433 01:40:37 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:52.433 01:40:37 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:52.691 MallocForNvmf0 00:05:52.691 01:40:38 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:52.691 01:40:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:52.949 MallocForNvmf1 00:05:52.949 01:40:38 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:52.949 01:40:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:53.207 [2024-04-15 01:40:38.626573] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:53.207 01:40:38 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:53.207 01:40:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:53.465 01:40:38 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:53.465 01:40:38 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:53.465 01:40:39 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:53.465 01:40:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:53.723 01:40:39 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:53.724 01:40:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:53.982 [2024-04-15 01:40:39.573689] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 
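[Editor's note] The nvmf target configuration above is scattered across xtrace fragments. Gathered into one place, the sequence this test drives is sketched below; every command and argument is taken verbatim from the trace, while the $RPC shorthand and the shortened script paths are editorial conveniences, not part of the run.

# Editorial sketch of the subsystem setup traced above. Assumes an spdk_tgt
# already listening on /var/tmp/spdk_tgt.sock, as in this run.
RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_create 8 512 --name MallocForNvmf0       # 8 MB bdev, 512 B blocks
$RPC bdev_malloc_create 4 1024 --name MallocForNvmf1      # 4 MB bdev, 1024 B blocks
$RPC nvmf_create_transport -t tcp -u 8192 -c 0            # logs "*** TCP Transport Init ***"
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420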
00:05:53.982 01:40:39 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:53.982 01:40:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:53.982 01:40:39 -- common/autotest_common.sh@10 -- # set +x 00:05:53.982 01:40:39 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:53.982 01:40:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:53.982 01:40:39 -- common/autotest_common.sh@10 -- # set +x 00:05:54.241 01:40:39 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:54.241 01:40:39 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:54.241 01:40:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:54.241 MallocBdevForConfigChangeCheck 00:05:54.241 01:40:39 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:54.241 01:40:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:54.241 01:40:39 -- common/autotest_common.sh@10 -- # set +x 00:05:54.499 01:40:39 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:54.499 01:40:39 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:54.756 01:40:40 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:54.756 INFO: shutting down applications... 00:05:54.756 01:40:40 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:54.756 01:40:40 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:54.756 01:40:40 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:54.756 01:40:40 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:56.654 Calling clear_iscsi_subsystem 00:05:56.654 Calling clear_nvmf_subsystem 00:05:56.654 Calling clear_nbd_subsystem 00:05:56.654 Calling clear_ublk_subsystem 00:05:56.654 Calling clear_vhost_blk_subsystem 00:05:56.654 Calling clear_vhost_scsi_subsystem 00:05:56.654 Calling clear_scheduler_subsystem 00:05:56.654 Calling clear_bdev_subsystem 00:05:56.654 Calling clear_accel_subsystem 00:05:56.654 Calling clear_vmd_subsystem 00:05:56.654 Calling clear_sock_subsystem 00:05:56.654 Calling clear_iobuf_subsystem 00:05:56.654 01:40:41 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:56.654 01:40:41 -- json_config/json_config.sh@396 -- # count=100 00:05:56.654 01:40:41 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:56.654 01:40:41 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:56.654 01:40:41 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:56.654 01:40:41 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:56.654 01:40:42 -- json_config/json_config.sh@398 -- # break 00:05:56.654 01:40:42 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:56.654 01:40:42 -- json_config/json_config.sh@432 -- # 
json_config_test_shutdown_app target 00:05:56.654 01:40:42 -- json_config/json_config.sh@120 -- # local app=target 00:05:56.654 01:40:42 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:56.654 01:40:42 -- json_config/json_config.sh@124 -- # [[ -n 2036318 ]] 00:05:56.654 01:40:42 -- json_config/json_config.sh@127 -- # kill -SIGINT 2036318 00:05:56.654 01:40:42 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:56.654 01:40:42 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:56.654 01:40:42 -- json_config/json_config.sh@130 -- # kill -0 2036318 00:05:56.654 01:40:42 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:57.223 01:40:42 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:57.223 01:40:42 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:57.223 01:40:42 -- json_config/json_config.sh@130 -- # kill -0 2036318 00:05:57.223 01:40:42 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:57.223 01:40:42 -- json_config/json_config.sh@132 -- # break 00:05:57.223 01:40:42 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:57.223 01:40:42 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:57.223 SPDK target shutdown done 00:05:57.223 01:40:42 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:57.223 INFO: relaunching applications... 00:05:57.223 01:40:42 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:57.223 01:40:42 -- json_config/json_config.sh@98 -- # local app=target 00:05:57.223 01:40:42 -- json_config/json_config.sh@99 -- # shift 00:05:57.223 01:40:42 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:57.223 01:40:42 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:57.223 01:40:42 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:57.223 01:40:42 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:57.223 01:40:42 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:57.223 01:40:42 -- json_config/json_config.sh@111 -- # app_pid[$app]=2037542 00:05:57.223 01:40:42 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:57.223 01:40:42 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:57.223 Waiting for target to run... 00:05:57.223 01:40:42 -- json_config/json_config.sh@114 -- # waitforlisten 2037542 /var/tmp/spdk_tgt.sock 00:05:57.223 01:40:42 -- common/autotest_common.sh@819 -- # '[' -z 2037542 ']' 00:05:57.223 01:40:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:57.223 01:40:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:57.223 01:40:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:57.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:57.223 01:40:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:57.223 01:40:42 -- common/autotest_common.sh@10 -- # set +x 00:05:57.223 [2024-04-15 01:40:42.835329] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:05:57.223 [2024-04-15 01:40:42.835428] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2037542 ] 00:05:57.223 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.789 [2024-04-15 01:40:43.332037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.789 [2024-04-15 01:40:43.411016] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:57.789 [2024-04-15 01:40:43.411209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.077 [2024-04-15 01:40:46.439883] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:01.077 [2024-04-15 01:40:46.472343] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:01.077 01:40:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:01.077 01:40:46 -- common/autotest_common.sh@852 -- # return 0 00:06:01.077 01:40:46 -- json_config/json_config.sh@115 -- # echo '' 00:06:01.077 00:06:01.077 01:40:46 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:06:01.077 01:40:46 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:01.077 INFO: Checking if target configuration is the same... 00:06:01.077 01:40:46 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:01.077 01:40:46 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:06:01.077 01:40:46 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:01.077 + '[' 2 -ne 2 ']' 00:06:01.077 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:01.077 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:01.077 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:01.077 +++ basename /dev/fd/62 00:06:01.077 ++ mktemp /tmp/62.XXX 00:06:01.077 + tmp_file_1=/tmp/62.BPP 00:06:01.077 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:01.077 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:01.077 + tmp_file_2=/tmp/spdk_tgt_config.json.mZ9 00:06:01.077 + ret=0 00:06:01.077 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:01.642 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:01.642 + diff -u /tmp/62.BPP /tmp/spdk_tgt_config.json.mZ9 00:06:01.642 + echo 'INFO: JSON config files are the same' 00:06:01.642 INFO: JSON config files are the same 00:06:01.642 + rm /tmp/62.BPP /tmp/spdk_tgt_config.json.mZ9 00:06:01.642 + exit 0 00:06:01.642 01:40:47 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:06:01.642 01:40:47 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:01.642 INFO: changing configuration and checking if this can be detected... 
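[Editor's note] On the comparison just traced: json_diff.sh does not diff the raw files; it first canonicalizes both JSON documents with config_filter.py -method sort, so that ordering differences do not register as changes. A condensed sketch follows, with illustrative temp-file names (the real run uses mktemp names such as /tmp/62.BPP).

# Condensed sketch of the normalize-and-diff comparison above. The sort
# method reads a config on stdin and writes a canonically ordered version,
# so diff only reports substantive differences.
RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC save_config > /tmp/live_config.json                 # live target state
sort_cfg() { test/json_config/config_filter.py -method sort; }
sort_cfg < /tmp/live_config.json > /tmp/live_sorted.json
sort_cfg < spdk_tgt_config.json  > /tmp/file_sorted.json
diff -u /tmp/live_sorted.json /tmp/file_sorted.json &&
    echo 'INFO: JSON config files are the same'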
00:06:01.642 01:40:47 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:01.642 01:40:47 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:01.899 01:40:47 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:01.899 01:40:47 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:06:01.899 01:40:47 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:01.899 + '[' 2 -ne 2 ']' 00:06:01.899 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:01.899 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:01.899 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:01.899 +++ basename /dev/fd/62 00:06:01.899 ++ mktemp /tmp/62.XXX 00:06:01.899 + tmp_file_1=/tmp/62.tV4 00:06:01.899 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:01.899 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:01.899 + tmp_file_2=/tmp/spdk_tgt_config.json.bUw 00:06:01.899 + ret=0 00:06:01.899 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:02.157 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:02.157 + diff -u /tmp/62.tV4 /tmp/spdk_tgt_config.json.bUw 00:06:02.157 + ret=1 00:06:02.157 + echo '=== Start of file: /tmp/62.tV4 ===' 00:06:02.157 + cat /tmp/62.tV4 00:06:02.157 + echo '=== End of file: /tmp/62.tV4 ===' 00:06:02.157 + echo '' 00:06:02.157 + echo '=== Start of file: /tmp/spdk_tgt_config.json.bUw ===' 00:06:02.157 + cat /tmp/spdk_tgt_config.json.bUw 00:06:02.157 + echo '=== End of file: /tmp/spdk_tgt_config.json.bUw ===' 00:06:02.157 + echo '' 00:06:02.157 + rm /tmp/62.tV4 /tmp/spdk_tgt_config.json.bUw 00:06:02.157 + exit 1 00:06:02.157 01:40:47 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:06:02.157 INFO: configuration change detected. 
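[Editor's note] The detection half of the test mutates the live configuration, deleting the MallocBdevForConfigChangeCheck marker bdev created earlier, and expects the same normalize-and-diff comparison sketched above to now fail (ret=1 in the trace). Roughly, with process substitution standing in for the /dev/fd/62 plumbing the trace shows:

# Sketch of the change-detection step: after the delete, the live config no
# longer matches spdk_tgt_config.json, so json_diff.sh must return nonzero.
RPC="scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
$RPC bdev_malloc_delete MallocBdevForConfigChangeCheck
if ! test/json_config/json_diff.sh <($RPC save_config) spdk_tgt_config.json; then
    echo 'INFO: configuration change detected.'
fi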
00:06:02.157 01:40:47 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:06:02.157 01:40:47 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:06:02.157 01:40:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:02.157 01:40:47 -- common/autotest_common.sh@10 -- # set +x 00:06:02.416 01:40:47 -- json_config/json_config.sh@360 -- # local ret=0 00:06:02.416 01:40:47 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:06:02.416 01:40:47 -- json_config/json_config.sh@370 -- # [[ -n 2037542 ]] 00:06:02.416 01:40:47 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:06:02.416 01:40:47 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:06:02.416 01:40:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:02.416 01:40:47 -- common/autotest_common.sh@10 -- # set +x 00:06:02.416 01:40:47 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:06:02.416 01:40:47 -- json_config/json_config.sh@246 -- # uname -s 00:06:02.416 01:40:47 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:06:02.416 01:40:47 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:06:02.416 01:40:47 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:06:02.416 01:40:47 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:06:02.416 01:40:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:02.416 01:40:47 -- common/autotest_common.sh@10 -- # set +x 00:06:02.416 01:40:47 -- json_config/json_config.sh@376 -- # killprocess 2037542 00:06:02.416 01:40:47 -- common/autotest_common.sh@926 -- # '[' -z 2037542 ']' 00:06:02.416 01:40:47 -- common/autotest_common.sh@930 -- # kill -0 2037542 00:06:02.416 01:40:47 -- common/autotest_common.sh@931 -- # uname 00:06:02.416 01:40:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:02.416 01:40:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2037542 00:06:02.416 01:40:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:02.416 01:40:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:02.416 01:40:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2037542' 00:06:02.416 killing process with pid 2037542 00:06:02.416 01:40:47 -- common/autotest_common.sh@945 -- # kill 2037542 00:06:02.416 01:40:47 -- common/autotest_common.sh@950 -- # wait 2037542 00:06:04.315 01:40:49 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:04.315 01:40:49 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:06:04.315 01:40:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:04.315 01:40:49 -- common/autotest_common.sh@10 -- # set +x 00:06:04.315 01:40:49 -- json_config/json_config.sh@381 -- # return 0 00:06:04.315 01:40:49 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:06:04.315 INFO: Success 00:06:04.315 00:06:04.315 real 0m16.063s 00:06:04.315 user 0m18.307s 00:06:04.315 sys 0m2.093s 00:06:04.315 01:40:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.315 01:40:49 -- common/autotest_common.sh@10 -- # set +x 00:06:04.315 ************************************ 00:06:04.315 END TEST json_config 00:06:04.315 ************************************ 00:06:04.315 01:40:49 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:04.315 01:40:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:04.315 01:40:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:04.315 01:40:49 -- common/autotest_common.sh@10 -- # set +x 00:06:04.315 ************************************ 00:06:04.315 START TEST json_config_extra_key 00:06:04.315 ************************************ 00:06:04.315 01:40:49 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:04.315 01:40:49 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:04.315 01:40:49 -- nvmf/common.sh@7 -- # uname -s 00:06:04.315 01:40:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:04.315 01:40:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:04.315 01:40:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:04.315 01:40:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:04.315 01:40:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:04.315 01:40:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:04.315 01:40:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:04.315 01:40:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:04.315 01:40:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:04.315 01:40:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:04.315 01:40:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:04.315 01:40:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:04.315 01:40:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:04.315 01:40:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:04.315 01:40:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:04.315 01:40:49 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:04.315 01:40:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:04.315 01:40:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:04.315 01:40:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:04.315 01:40:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.315 01:40:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.315 01:40:49 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.315 01:40:49 -- paths/export.sh@5 -- # export PATH 00:06:04.316 01:40:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:04.316 01:40:49 -- nvmf/common.sh@46 -- # : 0 00:06:04.316 01:40:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:04.316 01:40:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:04.316 01:40:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:04.316 01:40:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:04.316 01:40:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:04.316 01:40:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:04.316 01:40:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:04.316 01:40:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:06:04.316 INFO: launching applications... 
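[Editor's note] The 'Waiting for target to run...' lines below come from the waitforlisten helper in autotest_common.sh. Its body is not shown in this log, so the following is a hypothetical stand-in only, illustrating the polling idea; the function name, probe RPC, and sleep interval are assumptions, while the max_retries=100 budget does appear in the trace.

# HYPOTHETICAL stand-in for waitforlisten, NOT the real implementation.
# Poll until the target answers a trivial RPC on its Unix socket, or give
# up once the retry budget is spent.
waitforlisten_sketch() {
    local pid=$1 sock=${2:-/var/tmp/spdk_tgt.sock} i
    for ((i = 0; i < 100; i++)); do                  # max_retries=100 per the trace
        kill -0 "$pid" 2>/dev/null || return 1       # target process died
        scripts/rpc.py -s "$sock" -t 1 rpc_get_methods >/dev/null 2>&1 && return 0
        sleep 0.1                                    # assumed interval
    done
    return 1                                         # never started listening
}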
00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=2038487 00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:06:04.316 Waiting for target to run... 00:06:04.316 01:40:49 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 2038487 /var/tmp/spdk_tgt.sock 00:06:04.316 01:40:49 -- common/autotest_common.sh@819 -- # '[' -z 2038487 ']' 00:06:04.316 01:40:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:04.316 01:40:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:04.316 01:40:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:04.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:04.316 01:40:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:04.316 01:40:49 -- common/autotest_common.sh@10 -- # set +x 00:06:04.316 [2024-04-15 01:40:49.651781] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:04.316 [2024-04-15 01:40:49.651872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2038487 ] 00:06:04.316 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.574 [2024-04-15 01:40:50.005500] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.574 [2024-04-15 01:40:50.076727] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:04.574 [2024-04-15 01:40:50.076902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.138 01:40:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:05.138 01:40:50 -- common/autotest_common.sh@852 -- # return 0 00:06:05.138 01:40:50 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:05.138 00:06:05.138 01:40:50 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:06:05.138 INFO: shutting down applications... 
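[Editor's note] The shutdown the trace below performs reduces to a simple pattern: ask the target to exit with SIGINT, then poll with kill -0 for up to 30 half-second intervals. All of the pieces (the signal, the 30-iteration bound, the 0.5 s sleep, the final message) appear verbatim in the fragments that follow; only the consolidation is editorial.

# The shutdown loop from the trace, gathered into plain form.
app_pid=2038487                               # pid from this run
kill -SIGINT "$app_pid"                       # request graceful shutdown
for ((i = 0; i < 30; i++)); do
    kill -0 "$app_pid" 2>/dev/null || break   # gone? stop waiting
    sleep 0.5
done
echo 'SPDK target shutdown done'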
00:06:05.139 01:40:50 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:05.139 01:40:50 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:05.139 01:40:50 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:05.139 01:40:50 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 2038487 ]] 00:06:05.139 01:40:50 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 2038487 00:06:05.139 01:40:50 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:05.139 01:40:50 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:05.139 01:40:50 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2038487 00:06:05.139 01:40:50 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:05.733 01:40:51 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:05.733 01:40:51 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:05.733 01:40:51 -- json_config/json_config_extra_key.sh@50 -- # kill -0 2038487 00:06:05.733 01:40:51 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:05.733 01:40:51 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:05.733 01:40:51 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:05.733 01:40:51 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:05.733 SPDK target shutdown done 00:06:05.733 01:40:51 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:05.733 Success 00:06:05.733 00:06:05.733 real 0m1.546s 00:06:05.733 user 0m1.515s 00:06:05.733 sys 0m0.412s 00:06:05.733 01:40:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.733 01:40:51 -- common/autotest_common.sh@10 -- # set +x 00:06:05.733 ************************************ 00:06:05.733 END TEST json_config_extra_key 00:06:05.733 ************************************ 00:06:05.733 01:40:51 -- spdk/autotest.sh@180 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:05.733 01:40:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:05.733 01:40:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.733 01:40:51 -- common/autotest_common.sh@10 -- # set +x 00:06:05.733 ************************************ 00:06:05.733 START TEST alias_rpc 00:06:05.733 ************************************ 00:06:05.733 01:40:51 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:05.733 * Looking for test storage... 00:06:05.733 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:05.733 01:40:51 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:05.733 01:40:51 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2038788 00:06:05.733 01:40:51 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:05.733 01:40:51 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2038788 00:06:05.733 01:40:51 -- common/autotest_common.sh@819 -- # '[' -z 2038788 ']' 00:06:05.733 01:40:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.733 01:40:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:05.733 01:40:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:05.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.733 01:40:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:05.733 01:40:51 -- common/autotest_common.sh@10 -- # set +x 00:06:05.733 [2024-04-15 01:40:51.219605] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:05.733 [2024-04-15 01:40:51.219692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2038788 ] 00:06:05.733 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.733 [2024-04-15 01:40:51.277361] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.999 [2024-04-15 01:40:51.362288] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:05.999 [2024-04-15 01:40:51.362459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.565 01:40:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:06.565 01:40:52 -- common/autotest_common.sh@852 -- # return 0 00:06:06.565 01:40:52 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:06.822 01:40:52 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2038788 00:06:06.822 01:40:52 -- common/autotest_common.sh@926 -- # '[' -z 2038788 ']' 00:06:06.822 01:40:52 -- common/autotest_common.sh@930 -- # kill -0 2038788 00:06:06.822 01:40:52 -- common/autotest_common.sh@931 -- # uname 00:06:06.822 01:40:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:06.822 01:40:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2038788 00:06:06.822 01:40:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:06.822 01:40:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:06.822 01:40:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2038788' 00:06:06.822 killing process with pid 2038788 00:06:06.822 01:40:52 -- common/autotest_common.sh@945 -- # kill 2038788 00:06:06.822 01:40:52 -- common/autotest_common.sh@950 -- # wait 2038788 00:06:07.388 00:06:07.388 real 0m1.707s 00:06:07.388 user 0m1.965s 00:06:07.388 sys 0m0.454s 00:06:07.388 01:40:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.388 01:40:52 -- common/autotest_common.sh@10 -- # set +x 00:06:07.388 ************************************ 00:06:07.388 END TEST alias_rpc 00:06:07.388 ************************************ 00:06:07.388 01:40:52 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:06:07.388 01:40:52 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:07.388 01:40:52 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:07.388 01:40:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:07.388 01:40:52 -- common/autotest_common.sh@10 -- # set +x 00:06:07.388 ************************************ 00:06:07.388 START TEST spdkcli_tcp 00:06:07.388 ************************************ 00:06:07.388 01:40:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:07.388 * Looking for test storage... 
00:06:07.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:07.388 01:40:52 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:07.388 01:40:52 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:07.388 01:40:52 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:07.388 01:40:52 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:07.388 01:40:52 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:07.388 01:40:52 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:07.388 01:40:52 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:07.388 01:40:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:06:07.388 01:40:52 -- common/autotest_common.sh@10 -- # set +x 00:06:07.388 01:40:52 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2038996 00:06:07.388 01:40:52 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:07.388 01:40:52 -- spdkcli/tcp.sh@27 -- # waitforlisten 2038996 00:06:07.388 01:40:52 -- common/autotest_common.sh@819 -- # '[' -z 2038996 ']' 00:06:07.389 01:40:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.389 01:40:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:07.389 01:40:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.389 01:40:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:07.389 01:40:52 -- common/autotest_common.sh@10 -- # set +x 00:06:07.389 [2024-04-15 01:40:52.958152] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:07.389 [2024-04-15 01:40:52.958246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2038996 ] 00:06:07.389 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.389 [2024-04-15 01:40:53.015188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.647 [2024-04-15 01:40:53.098888] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:07.647 [2024-04-15 01:40:53.099182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.647 [2024-04-15 01:40:53.099187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.581 01:40:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:08.581 01:40:53 -- common/autotest_common.sh@852 -- # return 0 00:06:08.581 01:40:53 -- spdkcli/tcp.sh@31 -- # socat_pid=2039136 00:06:08.581 01:40:53 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:08.581 01:40:53 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:08.581 [ 00:06:08.581 "bdev_malloc_delete", 00:06:08.581 "bdev_malloc_create", 00:06:08.581 "bdev_null_resize", 00:06:08.581 "bdev_null_delete", 00:06:08.581 "bdev_null_create", 00:06:08.581 "bdev_nvme_cuse_unregister", 00:06:08.581 "bdev_nvme_cuse_register", 00:06:08.581 "bdev_opal_new_user", 00:06:08.581 "bdev_opal_set_lock_state", 00:06:08.581 "bdev_opal_delete", 00:06:08.581 "bdev_opal_get_info", 00:06:08.581 "bdev_opal_create", 00:06:08.581 "bdev_nvme_opal_revert", 00:06:08.581 "bdev_nvme_opal_init", 00:06:08.581 "bdev_nvme_send_cmd", 00:06:08.581 "bdev_nvme_get_path_iostat", 00:06:08.581 "bdev_nvme_get_mdns_discovery_info", 00:06:08.581 "bdev_nvme_stop_mdns_discovery", 00:06:08.581 "bdev_nvme_start_mdns_discovery", 00:06:08.581 "bdev_nvme_set_multipath_policy", 00:06:08.581 "bdev_nvme_set_preferred_path", 00:06:08.581 "bdev_nvme_get_io_paths", 00:06:08.581 "bdev_nvme_remove_error_injection", 00:06:08.581 "bdev_nvme_add_error_injection", 00:06:08.581 "bdev_nvme_get_discovery_info", 00:06:08.581 "bdev_nvme_stop_discovery", 00:06:08.581 "bdev_nvme_start_discovery", 00:06:08.581 "bdev_nvme_get_controller_health_info", 00:06:08.581 "bdev_nvme_disable_controller", 00:06:08.581 "bdev_nvme_enable_controller", 00:06:08.581 "bdev_nvme_reset_controller", 00:06:08.581 "bdev_nvme_get_transport_statistics", 00:06:08.581 "bdev_nvme_apply_firmware", 00:06:08.581 "bdev_nvme_detach_controller", 00:06:08.581 "bdev_nvme_get_controllers", 00:06:08.581 "bdev_nvme_attach_controller", 00:06:08.581 "bdev_nvme_set_hotplug", 00:06:08.581 "bdev_nvme_set_options", 00:06:08.581 "bdev_passthru_delete", 00:06:08.581 "bdev_passthru_create", 00:06:08.581 "bdev_lvol_grow_lvstore", 00:06:08.581 "bdev_lvol_get_lvols", 00:06:08.581 "bdev_lvol_get_lvstores", 00:06:08.581 "bdev_lvol_delete", 00:06:08.581 "bdev_lvol_set_read_only", 00:06:08.581 "bdev_lvol_resize", 00:06:08.581 "bdev_lvol_decouple_parent", 00:06:08.581 "bdev_lvol_inflate", 00:06:08.581 "bdev_lvol_rename", 00:06:08.581 "bdev_lvol_clone_bdev", 00:06:08.581 "bdev_lvol_clone", 00:06:08.581 "bdev_lvol_snapshot", 00:06:08.581 "bdev_lvol_create", 00:06:08.581 "bdev_lvol_delete_lvstore", 00:06:08.581 "bdev_lvol_rename_lvstore", 00:06:08.581 "bdev_lvol_create_lvstore", 00:06:08.581 "bdev_raid_set_options", 00:06:08.581 
"bdev_raid_remove_base_bdev", 00:06:08.581 "bdev_raid_add_base_bdev", 00:06:08.581 "bdev_raid_delete", 00:06:08.581 "bdev_raid_create", 00:06:08.581 "bdev_raid_get_bdevs", 00:06:08.581 "bdev_error_inject_error", 00:06:08.581 "bdev_error_delete", 00:06:08.581 "bdev_error_create", 00:06:08.581 "bdev_split_delete", 00:06:08.581 "bdev_split_create", 00:06:08.581 "bdev_delay_delete", 00:06:08.581 "bdev_delay_create", 00:06:08.581 "bdev_delay_update_latency", 00:06:08.581 "bdev_zone_block_delete", 00:06:08.581 "bdev_zone_block_create", 00:06:08.581 "blobfs_create", 00:06:08.581 "blobfs_detect", 00:06:08.581 "blobfs_set_cache_size", 00:06:08.581 "bdev_aio_delete", 00:06:08.581 "bdev_aio_rescan", 00:06:08.581 "bdev_aio_create", 00:06:08.581 "bdev_ftl_set_property", 00:06:08.581 "bdev_ftl_get_properties", 00:06:08.581 "bdev_ftl_get_stats", 00:06:08.581 "bdev_ftl_unmap", 00:06:08.581 "bdev_ftl_unload", 00:06:08.581 "bdev_ftl_delete", 00:06:08.581 "bdev_ftl_load", 00:06:08.581 "bdev_ftl_create", 00:06:08.581 "bdev_virtio_attach_controller", 00:06:08.581 "bdev_virtio_scsi_get_devices", 00:06:08.581 "bdev_virtio_detach_controller", 00:06:08.581 "bdev_virtio_blk_set_hotplug", 00:06:08.581 "bdev_iscsi_delete", 00:06:08.581 "bdev_iscsi_create", 00:06:08.581 "bdev_iscsi_set_options", 00:06:08.581 "accel_error_inject_error", 00:06:08.581 "ioat_scan_accel_module", 00:06:08.581 "dsa_scan_accel_module", 00:06:08.581 "iaa_scan_accel_module", 00:06:08.581 "vfu_virtio_create_scsi_endpoint", 00:06:08.581 "vfu_virtio_scsi_remove_target", 00:06:08.582 "vfu_virtio_scsi_add_target", 00:06:08.582 "vfu_virtio_create_blk_endpoint", 00:06:08.582 "vfu_virtio_delete_endpoint", 00:06:08.582 "iscsi_set_options", 00:06:08.582 "iscsi_get_auth_groups", 00:06:08.582 "iscsi_auth_group_remove_secret", 00:06:08.582 "iscsi_auth_group_add_secret", 00:06:08.582 "iscsi_delete_auth_group", 00:06:08.582 "iscsi_create_auth_group", 00:06:08.582 "iscsi_set_discovery_auth", 00:06:08.582 "iscsi_get_options", 00:06:08.582 "iscsi_target_node_request_logout", 00:06:08.582 "iscsi_target_node_set_redirect", 00:06:08.582 "iscsi_target_node_set_auth", 00:06:08.582 "iscsi_target_node_add_lun", 00:06:08.582 "iscsi_get_connections", 00:06:08.582 "iscsi_portal_group_set_auth", 00:06:08.582 "iscsi_start_portal_group", 00:06:08.582 "iscsi_delete_portal_group", 00:06:08.582 "iscsi_create_portal_group", 00:06:08.582 "iscsi_get_portal_groups", 00:06:08.582 "iscsi_delete_target_node", 00:06:08.582 "iscsi_target_node_remove_pg_ig_maps", 00:06:08.582 "iscsi_target_node_add_pg_ig_maps", 00:06:08.582 "iscsi_create_target_node", 00:06:08.582 "iscsi_get_target_nodes", 00:06:08.582 "iscsi_delete_initiator_group", 00:06:08.582 "iscsi_initiator_group_remove_initiators", 00:06:08.582 "iscsi_initiator_group_add_initiators", 00:06:08.582 "iscsi_create_initiator_group", 00:06:08.582 "iscsi_get_initiator_groups", 00:06:08.582 "nvmf_set_crdt", 00:06:08.582 "nvmf_set_config", 00:06:08.582 "nvmf_set_max_subsystems", 00:06:08.582 "nvmf_subsystem_get_listeners", 00:06:08.582 "nvmf_subsystem_get_qpairs", 00:06:08.582 "nvmf_subsystem_get_controllers", 00:06:08.582 "nvmf_get_stats", 00:06:08.582 "nvmf_get_transports", 00:06:08.582 "nvmf_create_transport", 00:06:08.582 "nvmf_get_targets", 00:06:08.582 "nvmf_delete_target", 00:06:08.582 "nvmf_create_target", 00:06:08.582 "nvmf_subsystem_allow_any_host", 00:06:08.582 "nvmf_subsystem_remove_host", 00:06:08.582 "nvmf_subsystem_add_host", 00:06:08.582 "nvmf_subsystem_remove_ns", 00:06:08.582 "nvmf_subsystem_add_ns", 00:06:08.582 
"nvmf_subsystem_listener_set_ana_state", 00:06:08.582 "nvmf_discovery_get_referrals", 00:06:08.582 "nvmf_discovery_remove_referral", 00:06:08.582 "nvmf_discovery_add_referral", 00:06:08.582 "nvmf_subsystem_remove_listener", 00:06:08.582 "nvmf_subsystem_add_listener", 00:06:08.582 "nvmf_delete_subsystem", 00:06:08.582 "nvmf_create_subsystem", 00:06:08.582 "nvmf_get_subsystems", 00:06:08.582 "env_dpdk_get_mem_stats", 00:06:08.582 "nbd_get_disks", 00:06:08.582 "nbd_stop_disk", 00:06:08.582 "nbd_start_disk", 00:06:08.582 "ublk_recover_disk", 00:06:08.582 "ublk_get_disks", 00:06:08.582 "ublk_stop_disk", 00:06:08.582 "ublk_start_disk", 00:06:08.582 "ublk_destroy_target", 00:06:08.582 "ublk_create_target", 00:06:08.582 "virtio_blk_create_transport", 00:06:08.582 "virtio_blk_get_transports", 00:06:08.582 "vhost_controller_set_coalescing", 00:06:08.582 "vhost_get_controllers", 00:06:08.582 "vhost_delete_controller", 00:06:08.582 "vhost_create_blk_controller", 00:06:08.582 "vhost_scsi_controller_remove_target", 00:06:08.582 "vhost_scsi_controller_add_target", 00:06:08.582 "vhost_start_scsi_controller", 00:06:08.582 "vhost_create_scsi_controller", 00:06:08.582 "thread_set_cpumask", 00:06:08.582 "framework_get_scheduler", 00:06:08.582 "framework_set_scheduler", 00:06:08.582 "framework_get_reactors", 00:06:08.582 "thread_get_io_channels", 00:06:08.582 "thread_get_pollers", 00:06:08.582 "thread_get_stats", 00:06:08.582 "framework_monitor_context_switch", 00:06:08.582 "spdk_kill_instance", 00:06:08.582 "log_enable_timestamps", 00:06:08.582 "log_get_flags", 00:06:08.582 "log_clear_flag", 00:06:08.582 "log_set_flag", 00:06:08.582 "log_get_level", 00:06:08.582 "log_set_level", 00:06:08.582 "log_get_print_level", 00:06:08.582 "log_set_print_level", 00:06:08.582 "framework_enable_cpumask_locks", 00:06:08.582 "framework_disable_cpumask_locks", 00:06:08.582 "framework_wait_init", 00:06:08.582 "framework_start_init", 00:06:08.582 "scsi_get_devices", 00:06:08.582 "bdev_get_histogram", 00:06:08.582 "bdev_enable_histogram", 00:06:08.582 "bdev_set_qos_limit", 00:06:08.582 "bdev_set_qd_sampling_period", 00:06:08.582 "bdev_get_bdevs", 00:06:08.582 "bdev_reset_iostat", 00:06:08.582 "bdev_get_iostat", 00:06:08.582 "bdev_examine", 00:06:08.582 "bdev_wait_for_examine", 00:06:08.582 "bdev_set_options", 00:06:08.582 "notify_get_notifications", 00:06:08.582 "notify_get_types", 00:06:08.582 "accel_get_stats", 00:06:08.582 "accel_set_options", 00:06:08.582 "accel_set_driver", 00:06:08.582 "accel_crypto_key_destroy", 00:06:08.582 "accel_crypto_keys_get", 00:06:08.582 "accel_crypto_key_create", 00:06:08.582 "accel_assign_opc", 00:06:08.582 "accel_get_module_info", 00:06:08.582 "accel_get_opc_assignments", 00:06:08.582 "vmd_rescan", 00:06:08.582 "vmd_remove_device", 00:06:08.582 "vmd_enable", 00:06:08.582 "sock_set_default_impl", 00:06:08.582 "sock_impl_set_options", 00:06:08.582 "sock_impl_get_options", 00:06:08.582 "iobuf_get_stats", 00:06:08.582 "iobuf_set_options", 00:06:08.582 "framework_get_pci_devices", 00:06:08.582 "framework_get_config", 00:06:08.582 "framework_get_subsystems", 00:06:08.582 "vfu_tgt_set_base_path", 00:06:08.582 "trace_get_info", 00:06:08.582 "trace_get_tpoint_group_mask", 00:06:08.582 "trace_disable_tpoint_group", 00:06:08.582 "trace_enable_tpoint_group", 00:06:08.582 "trace_clear_tpoint_mask", 00:06:08.582 "trace_set_tpoint_mask", 00:06:08.582 "spdk_get_version", 00:06:08.582 "rpc_get_methods" 00:06:08.582 ] 00:06:08.582 01:40:54 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:08.582 
01:40:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:06:08.582 01:40:54 -- common/autotest_common.sh@10 -- # set +x 00:06:08.582 01:40:54 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:08.582 01:40:54 -- spdkcli/tcp.sh@38 -- # killprocess 2038996 00:06:08.582 01:40:54 -- common/autotest_common.sh@926 -- # '[' -z 2038996 ']' 00:06:08.582 01:40:54 -- common/autotest_common.sh@930 -- # kill -0 2038996 00:06:08.582 01:40:54 -- common/autotest_common.sh@931 -- # uname 00:06:08.582 01:40:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:08.582 01:40:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2038996 00:06:08.582 01:40:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:08.582 01:40:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:08.582 01:40:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2038996' 00:06:08.582 killing process with pid 2038996 00:06:08.582 01:40:54 -- common/autotest_common.sh@945 -- # kill 2038996 00:06:08.582 01:40:54 -- common/autotest_common.sh@950 -- # wait 2038996 00:06:09.154 00:06:09.154 real 0m1.719s 00:06:09.154 user 0m3.370s 00:06:09.154 sys 0m0.467s 00:06:09.154 01:40:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.154 01:40:54 -- common/autotest_common.sh@10 -- # set +x 00:06:09.154 ************************************ 00:06:09.154 END TEST spdkcli_tcp 00:06:09.154 ************************************ 00:06:09.155 01:40:54 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:09.155 01:40:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:09.155 01:40:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.155 01:40:54 -- common/autotest_common.sh@10 -- # set +x 00:06:09.155 ************************************ 00:06:09.155 START TEST dpdk_mem_utility 00:06:09.155 ************************************ 00:06:09.155 01:40:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:09.155 * Looking for test storage... 00:06:09.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:09.155 01:40:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:09.155 01:40:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2039326 00:06:09.155 01:40:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:09.155 01:40:54 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2039326 00:06:09.155 01:40:54 -- common/autotest_common.sh@819 -- # '[' -z 2039326 ']' 00:06:09.155 01:40:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.155 01:40:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:09.155 01:40:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
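The dpdk_mem_utility test starting here has the target dump its DPDK allocator state and then post-processes the dump: env_dpdk_get_mem_stats writes the file named in its RPC reply (/tmp/spdk_mem_dump.txt below), which dpdk_mem_info.py is assumed to parse. The two helper invocations the script makes:

    scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    scripts/dpdk_mem_info.py                 # heap / mempool / memzone summary
    scripts/dpdk_mem_info.py -m 0            # element-level detail for heap id 0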
00:06:09.155 01:40:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:09.155 01:40:54 -- common/autotest_common.sh@10 -- # set +x 00:06:09.155 [2024-04-15 01:40:54.695568] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:09.155 [2024-04-15 01:40:54.695664] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2039326 ] 00:06:09.155 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.155 [2024-04-15 01:40:54.753215] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.415 [2024-04-15 01:40:54.836625] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:09.415 [2024-04-15 01:40:54.836808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.979 01:40:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:09.979 01:40:55 -- common/autotest_common.sh@852 -- # return 0 00:06:09.979 01:40:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:09.979 01:40:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:09.979 01:40:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:09.979 01:40:55 -- common/autotest_common.sh@10 -- # set +x 00:06:10.238 { 00:06:10.238 "filename": "/tmp/spdk_mem_dump.txt" 00:06:10.238 } 00:06:10.238 01:40:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:10.238 01:40:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:10.238 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:10.238 1 heaps totaling size 814.000000 MiB 00:06:10.238 size: 814.000000 MiB heap id: 0 00:06:10.238 end heaps---------- 00:06:10.238 8 mempools totaling size 598.116089 MiB 00:06:10.238 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:10.238 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:10.238 size: 84.521057 MiB name: bdev_io_2039326 00:06:10.238 size: 51.011292 MiB name: evtpool_2039326 00:06:10.238 size: 50.003479 MiB name: msgpool_2039326 00:06:10.238 size: 21.763794 MiB name: PDU_Pool 00:06:10.238 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:10.238 size: 0.026123 MiB name: Session_Pool 00:06:10.238 end mempools------- 00:06:10.238 6 memzones totaling size 4.142822 MiB 00:06:10.238 size: 1.000366 MiB name: RG_ring_0_2039326 00:06:10.238 size: 1.000366 MiB name: RG_ring_1_2039326 00:06:10.238 size: 1.000366 MiB name: RG_ring_4_2039326 00:06:10.238 size: 1.000366 MiB name: RG_ring_5_2039326 00:06:10.238 size: 0.125366 MiB name: RG_ring_2_2039326 00:06:10.238 size: 0.015991 MiB name: RG_ring_3_2039326 00:06:10.238 end memzones------- 00:06:10.238 01:40:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:10.238 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:10.238 list of free elements. 
size: 12.519348 MiB 00:06:10.238 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:10.238 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:10.238 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:10.238 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:10.238 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:10.238 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:10.238 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:10.238 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:10.238 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:10.238 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:10.238 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:10.238 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:10.238 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:10.238 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:10.238 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:10.238 list of standard malloc elements. size: 199.218079 MiB 00:06:10.238 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:10.238 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:10.238 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:10.238 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:10.238 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:10.238 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:10.238 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:10.238 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:10.238 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:10.238 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:10.238 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:10.238 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:10.238 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:10.238 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:10.238 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:10.238 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:10.238 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:10.238 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:10.238 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:10.238 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:10.238 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:10.238 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:10.238 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:10.238 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:10.238 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:10.238 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:10.238 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:10.238 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:10.238 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:10.238 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:10.238 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:10.238 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:10.238 element at address: 0x2000192efd00 with size: 0.000183 MiB 
00:06:10.238 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:10.238 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:10.238 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:10.238 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:10.238 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:10.238 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:10.238 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:10.238 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:10.238 list of memzone associated elements. size: 602.262573 MiB 00:06:10.238 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:10.238 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:10.238 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:10.238 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:10.238 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:10.238 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2039326_0 00:06:10.238 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:10.238 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2039326_0 00:06:10.238 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:10.238 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2039326_0 00:06:10.238 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:10.238 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:10.238 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:10.238 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:10.238 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:10.238 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2039326 00:06:10.238 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:10.238 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2039326 00:06:10.238 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:10.238 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2039326 00:06:10.239 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:10.239 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:10.239 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:10.239 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:10.239 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:10.239 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:10.239 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:10.239 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:10.239 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:10.239 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2039326 00:06:10.239 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:10.239 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2039326 00:06:10.239 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:10.239 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2039326 00:06:10.239 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:10.239 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2039326 00:06:10.239 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:10.239 associated 
memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2039326 00:06:10.239 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:10.239 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:10.239 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:10.239 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:10.239 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:10.239 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:10.239 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:10.239 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2039326 00:06:10.239 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:10.239 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:10.239 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:10.239 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:10.239 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:10.239 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2039326 00:06:10.239 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:10.239 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:10.239 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:10.239 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2039326 00:06:10.239 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:10.239 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2039326 00:06:10.239 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:10.239 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:10.239 01:40:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:10.239 01:40:55 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2039326 00:06:10.239 01:40:55 -- common/autotest_common.sh@926 -- # '[' -z 2039326 ']' 00:06:10.239 01:40:55 -- common/autotest_common.sh@930 -- # kill -0 2039326 00:06:10.239 01:40:55 -- common/autotest_common.sh@931 -- # uname 00:06:10.239 01:40:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:10.239 01:40:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2039326 00:06:10.239 01:40:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:10.239 01:40:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:10.239 01:40:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2039326' 00:06:10.239 killing process with pid 2039326 00:06:10.239 01:40:55 -- common/autotest_common.sh@945 -- # kill 2039326 00:06:10.239 01:40:55 -- common/autotest_common.sh@950 -- # wait 2039326 00:06:10.806 00:06:10.806 real 0m1.557s 00:06:10.806 user 0m1.714s 00:06:10.806 sys 0m0.421s 00:06:10.806 01:40:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.806 01:40:56 -- common/autotest_common.sh@10 -- # set +x 00:06:10.806 ************************************ 00:06:10.806 END TEST dpdk_mem_utility 00:06:10.806 ************************************ 00:06:10.806 01:40:56 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:10.806 01:40:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:10.806 01:40:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.806 01:40:56 -- common/autotest_common.sh@10 -- # set +x 
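Every mempool and ring in such a dump carries the owning target's PID as a suffix (msgpool_2039326, RG_ring_0_2039326, and so on), so one quick, illustrative way to isolate a single run's footprint is to filter the dump on that PID:

    grep -E '(msgpool|evtpool|bdev_io|RG_ring_[0-9]+)_2039326' /tmp/spdk_mem_dump.txt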
00:06:10.806 ************************************ 00:06:10.806 START TEST event 00:06:10.806 ************************************ 00:06:10.806 01:40:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:10.806 * Looking for test storage... 00:06:10.806 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:10.806 01:40:56 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:10.806 01:40:56 -- bdev/nbd_common.sh@6 -- # set -e 00:06:10.806 01:40:56 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:10.806 01:40:56 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:10.806 01:40:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:10.806 01:40:56 -- common/autotest_common.sh@10 -- # set +x 00:06:10.806 ************************************ 00:06:10.806 START TEST event_perf 00:06:10.806 ************************************ 00:06:10.806 01:40:56 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:10.806 Running I/O for 1 seconds...[2024-04-15 01:40:56.247614] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:10.806 [2024-04-15 01:40:56.247702] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2039525 ] 00:06:10.806 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.806 [2024-04-15 01:40:56.307484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:10.806 [2024-04-15 01:40:56.397284] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.806 [2024-04-15 01:40:56.397341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:10.806 [2024-04-15 01:40:56.397406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.806 [2024-04-15 01:40:56.397409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.177 Running I/O for 1 seconds... 00:06:12.177 lcore 0: 227735 00:06:12.177 lcore 1: 227733 00:06:12.177 lcore 2: 227733 00:06:12.177 lcore 3: 227733 00:06:12.177 done. 
00:06:12.177 00:06:12.177 real 0m1.248s 00:06:12.177 user 0m4.165s 00:06:12.177 sys 0m0.080s 00:06:12.177 01:40:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.177 01:40:57 -- common/autotest_common.sh@10 -- # set +x 00:06:12.177 ************************************ 00:06:12.177 END TEST event_perf 00:06:12.177 ************************************ 00:06:12.177 01:40:57 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:12.177 01:40:57 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:12.177 01:40:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.177 01:40:57 -- common/autotest_common.sh@10 -- # set +x 00:06:12.177 ************************************ 00:06:12.177 START TEST event_reactor 00:06:12.177 ************************************ 00:06:12.177 01:40:57 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:12.177 [2024-04-15 01:40:57.523056] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:12.177 [2024-04-15 01:40:57.523172] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2039688 ] 00:06:12.177 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.177 [2024-04-15 01:40:57.586776] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.177 [2024-04-15 01:40:57.674370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.110 test_start 00:06:13.110 oneshot 00:06:13.110 tick 100 00:06:13.110 tick 100 00:06:13.110 tick 250 00:06:13.110 tick 100 00:06:13.110 tick 100 00:06:13.110 tick 100 00:06:13.110 tick 250 00:06:13.110 tick 500 00:06:13.110 tick 100 00:06:13.110 tick 100 00:06:13.110 tick 250 00:06:13.110 tick 100 00:06:13.110 tick 100 00:06:13.110 test_end 00:06:13.110 00:06:13.110 real 0m1.249s 00:06:13.110 user 0m1.165s 00:06:13.110 sys 0m0.078s 00:06:13.369 01:40:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:13.369 01:40:58 -- common/autotest_common.sh@10 -- # set +x 00:06:13.369 ************************************ 00:06:13.369 END TEST event_reactor 00:06:13.369 ************************************ 00:06:13.369 01:40:58 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:13.369 01:40:58 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:13.369 01:40:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:13.369 01:40:58 -- common/autotest_common.sh@10 -- # set +x 00:06:13.369 ************************************ 00:06:13.369 START TEST event_reactor_perf 00:06:13.369 ************************************ 00:06:13.369 01:40:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:13.369 [2024-04-15 01:40:58.799632] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
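event_perf above and the reactor tests around it are one-second micro-benchmarks of the event framework; -m sets the reactor core mask and -t the run time in seconds. Their invocations, reduced to essentials (paths relative to the spdk tree):

    ./test/event/event_perf/event_perf -m 0xF -t 1   # 4 reactors; ~227k events per lcore above
    ./test/event/reactor/reactor -t 1                # single reactor driving timed pollers
    ./test/event/reactor_perf/reactor_perf -t 1      # reports events per second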
00:06:13.369 [2024-04-15 01:40:58.799709] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2039854 ] 00:06:13.369 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.369 [2024-04-15 01:40:58.866144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.369 [2024-04-15 01:40:58.956184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.743 test_start 00:06:14.743 test_end 00:06:14.743 Performance: 353552 events per second 00:06:14.743 00:06:14.743 real 0m1.248s 00:06:14.743 user 0m1.154s 00:06:14.743 sys 0m0.089s 00:06:14.743 01:41:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.743 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:06:14.743 ************************************ 00:06:14.743 END TEST event_reactor_perf 00:06:14.743 ************************************ 00:06:14.743 01:41:00 -- event/event.sh@49 -- # uname -s 00:06:14.743 01:41:00 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:14.743 01:41:00 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:14.743 01:41:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:14.743 01:41:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:14.743 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:06:14.743 ************************************ 00:06:14.743 START TEST event_scheduler 00:06:14.743 ************************************ 00:06:14.743 01:41:00 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:14.743 * Looking for test storage... 00:06:14.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:14.743 01:41:00 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:14.743 01:41:00 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2040142 00:06:14.743 01:41:00 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:14.743 01:41:00 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.743 01:41:00 -- scheduler/scheduler.sh@37 -- # waitforlisten 2040142 00:06:14.743 01:41:00 -- common/autotest_common.sh@819 -- # '[' -z 2040142 ']' 00:06:14.743 01:41:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:14.743 01:41:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:14.743 01:41:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:14.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:14.743 01:41:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:14.743 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:06:14.743 [2024-04-15 01:41:00.155102] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
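The scheduler test app launched next passes --wait-for-rpc, which defers framework initialization until an RPC arrives so the test can install its scheduler first; -p 0x2 selects the main lcore (the EAL line below shows --main-lcore=2). Its launch, roughly as traced (the -f flag is copied verbatim from the trace):

    test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
    scheduler_pid=$!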
00:06:14.743 [2024-04-15 01:41:00.155175] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2040142 ] 00:06:14.743 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.743 [2024-04-15 01:41:00.214870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:14.743 [2024-04-15 01:41:00.302915] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.743 [2024-04-15 01:41:00.302973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.743 [2024-04-15 01:41:00.303039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:14.743 [2024-04-15 01:41:00.303041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:14.743 01:41:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:14.743 01:41:00 -- common/autotest_common.sh@852 -- # return 0 00:06:14.743 01:41:00 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:14.743 01:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:14.743 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:06:14.743 POWER: Env isn't set yet! 00:06:14.743 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:14.743 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_available_frequencies 00:06:14.743 POWER: Cannot get available frequencies of lcore 0 00:06:14.743 POWER: Attempting to initialise PSTAT power management... 00:06:14.743 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:14.744 POWER: Initialized successfully for lcore 0 power management 00:06:14.744 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:14.744 POWER: Initialized successfully for lcore 1 power management 00:06:15.002 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:15.002 POWER: Initialized successfully for lcore 2 power management 00:06:15.002 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:15.002 POWER: Initialized successfully for lcore 3 power management 00:06:15.002 01:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.002 01:41:00 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:15.002 01:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.002 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:06:15.002 [2024-04-15 01:41:00.512188] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
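The ordering above matters: switching to the dynamic scheduler is what brings up power management (the per-lcore 'performance' governor messages), and only then does framework_start_init launch the reactors and fire the test_start notice. Condensed:

    rpc_cmd framework_set_scheduler dynamic   # must precede framework init; sets governors
    rpc_cmd framework_start_init              # reactors start; scheduler test begins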
00:06:15.002 01:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.003 01:41:00 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:15.003 01:41:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:15.003 01:41:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.003 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:06:15.003 ************************************ 00:06:15.003 START TEST scheduler_create_thread 00:06:15.003 ************************************ 00:06:15.003 01:41:00 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:15.003 01:41:00 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:15.003 01:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.003 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:06:15.003 2 00:06:15.003 01:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.003 01:41:00 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:15.003 01:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.003 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:06:15.003 3 00:06:15.003 01:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.003 01:41:00 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:15.003 01:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.003 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:06:15.003 4 00:06:15.003 01:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.003 01:41:00 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:15.003 01:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.003 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:06:15.003 5 00:06:15.003 01:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.003 01:41:00 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:15.003 01:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.003 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:06:15.003 6 00:06:15.003 01:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.003 01:41:00 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:15.003 01:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.003 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:06:15.003 7 00:06:15.003 01:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.003 01:41:00 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:15.003 01:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.003 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:06:15.003 8 00:06:15.003 01:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.003 01:41:00 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:15.003 01:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.003 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:06:15.003 9 00:06:15.003 
01:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.003 01:41:00 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:15.003 01:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.003 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:06:15.003 10 00:06:15.003 01:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.003 01:41:00 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:15.003 01:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.003 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:06:15.003 01:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.003 01:41:00 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:15.003 01:41:00 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:15.003 01:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.003 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:06:15.003 01:41:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.003 01:41:00 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:15.003 01:41:00 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.003 01:41:00 -- common/autotest_common.sh@10 -- # set +x 00:06:15.569 01:41:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:15.569 01:41:01 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:15.569 01:41:01 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:15.569 01:41:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:15.569 01:41:01 -- common/autotest_common.sh@10 -- # set +x 00:06:16.942 01:41:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:16.942 00:06:16.942 real 0m1.754s 00:06:16.942 user 0m0.010s 00:06:16.942 sys 0m0.004s 00:06:16.942 01:41:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.942 01:41:02 -- common/autotest_common.sh@10 -- # set +x 00:06:16.942 ************************************ 00:06:16.942 END TEST scheduler_create_thread 00:06:16.942 ************************************ 00:06:16.942 01:41:02 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:16.942 01:41:02 -- scheduler/scheduler.sh@46 -- # killprocess 2040142 00:06:16.942 01:41:02 -- common/autotest_common.sh@926 -- # '[' -z 2040142 ']' 00:06:16.942 01:41:02 -- common/autotest_common.sh@930 -- # kill -0 2040142 00:06:16.942 01:41:02 -- common/autotest_common.sh@931 -- # uname 00:06:16.942 01:41:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:16.942 01:41:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2040142 00:06:16.942 01:41:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:16.942 01:41:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:16.942 01:41:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2040142' 00:06:16.942 killing process with pid 2040142 00:06:16.942 01:41:02 -- common/autotest_common.sh@945 -- # kill 2040142 00:06:16.942 01:41:02 -- common/autotest_common.sh@950 -- # wait 2040142 00:06:17.201 [2024-04-15 01:41:02.751683] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
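The scheduler_create_thread test above drives the plugin RPCs directly: each thread is created with a name, a CPU mask, and an 'active' percentage, then retargeted or deleted by the thread id the create call returns (11 and 12 in this run). The core calls as traced, with the returned id held in a variable:

    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$thread_id" 50
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$thread_id"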
00:06:17.459 POWER: Power management governor of lcore 0 has been set to 'userspace' successfully 00:06:17.459 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:17.459 POWER: Power management governor of lcore 1 has been set to 'schedutil' successfully 00:06:17.459 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:17.459 POWER: Power management governor of lcore 2 has been set to 'schedutil' successfully 00:06:17.459 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:17.459 POWER: Power management governor of lcore 3 has been set to 'schedutil' successfully 00:06:17.459 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:17.459 00:06:17.459 real 0m2.890s 00:06:17.459 user 0m3.814s 00:06:17.459 sys 0m0.297s 00:06:17.459 01:41:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.459 01:41:02 -- common/autotest_common.sh@10 -- # set +x 00:06:17.459 ************************************ 00:06:17.459 END TEST event_scheduler 00:06:17.459 ************************************ 00:06:17.459 01:41:02 -- event/event.sh@51 -- # modprobe -n nbd 00:06:17.459 01:41:02 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:17.459 01:41:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:17.459 01:41:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:17.459 01:41:02 -- common/autotest_common.sh@10 -- # set +x 00:06:17.459 ************************************ 00:06:17.459 START TEST app_repeat 00:06:17.459 ************************************ 00:06:17.459 01:41:02 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:17.459 01:41:02 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.459 01:41:02 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.459 01:41:02 -- event/event.sh@13 -- # local nbd_list 00:06:17.459 01:41:02 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.459 01:41:02 -- event/event.sh@14 -- # local bdev_list 00:06:17.459 01:41:02 -- event/event.sh@15 -- # local repeat_times=4 00:06:17.459 01:41:02 -- event/event.sh@17 -- # modprobe nbd 00:06:17.459 01:41:02 -- event/event.sh@19 -- # repeat_pid=2040479 00:06:17.459 01:41:02 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:17.459 01:41:02 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:17.459 01:41:02 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2040479' 00:06:17.459 Process app_repeat pid: 2040479 00:06:17.459 01:41:02 -- event/event.sh@23 -- # for i in {0..2} 00:06:17.459 01:41:02 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:17.459 spdk_app_start Round 0 00:06:17.459 01:41:02 -- event/event.sh@25 -- # waitforlisten 2040479 /var/tmp/spdk-nbd.sock 00:06:17.459 01:41:02 -- common/autotest_common.sh@819 -- # '[' -z 2040479 ']' 00:06:17.459 01:41:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.459 01:41:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:17.459 01:41:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
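app_repeat exercises the NBD path against a dedicated RPC socket: two malloc bdevs are created (the arguments are size in MiB and block size in bytes) and each is exported as a kernel /dev/nbdX device. The setup calls from this round, abbreviated:

    rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096    # 64 MiB, 4 KiB blocks
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
    rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1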
00:06:17.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.459 01:41:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:17.459 01:41:02 -- common/autotest_common.sh@10 -- # set +x 00:06:17.459 [2024-04-15 01:41:03.007271] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:17.459 [2024-04-15 01:41:03.007363] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2040479 ] 00:06:17.459 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.459 [2024-04-15 01:41:03.069768] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:17.718 [2024-04-15 01:41:03.157234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.718 [2024-04-15 01:41:03.157240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.652 01:41:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:18.652 01:41:03 -- common/autotest_common.sh@852 -- # return 0 00:06:18.652 01:41:03 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.652 Malloc0 00:06:18.652 01:41:04 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.911 Malloc1 00:06:18.911 01:41:04 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.911 01:41:04 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.911 01:41:04 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.911 01:41:04 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.911 01:41:04 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.911 01:41:04 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.911 01:41:04 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.911 01:41:04 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.911 01:41:04 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.911 01:41:04 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.911 01:41:04 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.911 01:41:04 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.911 01:41:04 -- bdev/nbd_common.sh@12 -- # local i 00:06:18.911 01:41:04 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.911 01:41:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.911 01:41:04 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.169 /dev/nbd0 00:06:19.169 01:41:04 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.169 01:41:04 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.169 01:41:04 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:19.169 01:41:04 -- common/autotest_common.sh@857 -- # local i 00:06:19.169 01:41:04 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:19.169 01:41:04 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:19.169 01:41:04 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:19.169 01:41:04 -- 
common/autotest_common.sh@861 -- # break 00:06:19.169 01:41:04 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:19.169 01:41:04 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:19.169 01:41:04 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.169 1+0 records in 00:06:19.169 1+0 records out 00:06:19.169 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000146409 s, 28.0 MB/s 00:06:19.169 01:41:04 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.169 01:41:04 -- common/autotest_common.sh@874 -- # size=4096 00:06:19.169 01:41:04 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.169 01:41:04 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:19.169 01:41:04 -- common/autotest_common.sh@877 -- # return 0 00:06:19.169 01:41:04 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.169 01:41:04 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.169 01:41:04 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.427 /dev/nbd1 00:06:19.427 01:41:05 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.427 01:41:05 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.427 01:41:05 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:19.427 01:41:05 -- common/autotest_common.sh@857 -- # local i 00:06:19.427 01:41:05 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:19.427 01:41:05 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:19.427 01:41:05 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:19.427 01:41:05 -- common/autotest_common.sh@861 -- # break 00:06:19.427 01:41:05 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:19.427 01:41:05 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:19.427 01:41:05 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.427 1+0 records in 00:06:19.427 1+0 records out 00:06:19.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215312 s, 19.0 MB/s 00:06:19.427 01:41:05 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.427 01:41:05 -- common/autotest_common.sh@874 -- # size=4096 00:06:19.427 01:41:05 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.427 01:41:05 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:19.427 01:41:05 -- common/autotest_common.sh@877 -- # return 0 00:06:19.427 01:41:05 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.427 01:41:05 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.427 01:41:05 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.427 01:41:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.427 01:41:05 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.685 01:41:05 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.685 { 00:06:19.685 "nbd_device": "/dev/nbd0", 00:06:19.685 "bdev_name": "Malloc0" 00:06:19.685 }, 00:06:19.685 { 00:06:19.685 "nbd_device": "/dev/nbd1", 
00:06:19.685 "bdev_name": "Malloc1" 00:06:19.685 } 00:06:19.685 ]' 00:06:19.685 01:41:05 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.685 { 00:06:19.686 "nbd_device": "/dev/nbd0", 00:06:19.686 "bdev_name": "Malloc0" 00:06:19.686 }, 00:06:19.686 { 00:06:19.686 "nbd_device": "/dev/nbd1", 00:06:19.686 "bdev_name": "Malloc1" 00:06:19.686 } 00:06:19.686 ]' 00:06:19.686 01:41:05 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.949 01:41:05 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.949 /dev/nbd1' 00:06:19.949 01:41:05 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.950 /dev/nbd1' 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.950 256+0 records in 00:06:19.950 256+0 records out 00:06:19.950 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502349 s, 209 MB/s 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.950 256+0 records in 00:06:19.950 256+0 records out 00:06:19.950 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.023267 s, 45.1 MB/s 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.950 256+0 records in 00:06:19.950 256+0 records out 00:06:19.950 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249683 s, 42.0 MB/s 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@83 -- # cmp -b -n 
1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@51 -- # local i 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.950 01:41:05 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.266 01:41:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.266 01:41:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.266 01:41:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.266 01:41:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.266 01:41:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.266 01:41:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.266 01:41:05 -- bdev/nbd_common.sh@41 -- # break 00:06:20.266 01:41:05 -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.266 01:41:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.266 01:41:05 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.525 01:41:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.525 01:41:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.525 01:41:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.525 01:41:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.525 01:41:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.525 01:41:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.525 01:41:05 -- bdev/nbd_common.sh@41 -- # break 00:06:20.525 01:41:05 -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.525 01:41:05 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.525 01:41:05 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.525 01:41:05 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.783 01:41:06 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.783 01:41:06 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.783 01:41:06 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.783 01:41:06 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.783 01:41:06 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.783 01:41:06 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.783 01:41:06 -- bdev/nbd_common.sh@65 -- # true 00:06:20.783 01:41:06 -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.783 01:41:06 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.783 01:41:06 -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.783 01:41:06 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.783 01:41:06 -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.783 01:41:06 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.041 01:41:06 -- event/event.sh@35 -- # 
sleep 3 00:06:21.299 [2024-04-15 01:41:06.738915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.300 [2024-04-15 01:41:06.831518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.300 [2024-04-15 01:41:06.831518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.300 [2024-04-15 01:41:06.892689] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.300 [2024-04-15 01:41:06.892763] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.576 01:41:09 -- event/event.sh@23 -- # for i in {0..2} 00:06:24.576 01:41:09 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:24.576 spdk_app_start Round 1 00:06:24.576 01:41:09 -- event/event.sh@25 -- # waitforlisten 2040479 /var/tmp/spdk-nbd.sock 00:06:24.576 01:41:09 -- common/autotest_common.sh@819 -- # '[' -z 2040479 ']' 00:06:24.576 01:41:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.576 01:41:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.576 01:41:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.576 01:41:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.576 01:41:09 -- common/autotest_common.sh@10 -- # set +x 00:06:24.576 01:41:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.576 01:41:09 -- common/autotest_common.sh@852 -- # return 0 00:06:24.576 01:41:09 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.576 Malloc0 00:06:24.576 01:41:10 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.834 Malloc1 00:06:24.834 01:41:10 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.834 01:41:10 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.834 01:41:10 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.834 01:41:10 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.834 01:41:10 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.834 01:41:10 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.834 01:41:10 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.834 01:41:10 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.834 01:41:10 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.834 01:41:10 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.834 01:41:10 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.834 01:41:10 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.834 01:41:10 -- bdev/nbd_common.sh@12 -- # local i 00:06:24.834 01:41:10 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.834 01:41:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.834 01:41:10 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:25.091 /dev/nbd0 00:06:25.091 01:41:10 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.091 01:41:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:25.091 01:41:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:25.091 01:41:10 -- common/autotest_common.sh@857 -- # local i 00:06:25.091 01:41:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:25.091 01:41:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:25.091 01:41:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:25.091 01:41:10 -- common/autotest_common.sh@861 -- # break 00:06:25.091 01:41:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:25.091 01:41:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:25.091 01:41:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.091 1+0 records in 00:06:25.091 1+0 records out 00:06:25.091 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00017112 s, 23.9 MB/s 00:06:25.091 01:41:10 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.091 01:41:10 -- common/autotest_common.sh@874 -- # size=4096 00:06:25.091 01:41:10 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.091 01:41:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:25.091 01:41:10 -- common/autotest_common.sh@877 -- # return 0 00:06:25.091 01:41:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.091 01:41:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.091 01:41:10 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:25.349 /dev/nbd1 00:06:25.349 01:41:10 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:25.349 01:41:10 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:25.349 01:41:10 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:25.349 01:41:10 -- common/autotest_common.sh@857 -- # local i 00:06:25.349 01:41:10 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:25.349 01:41:10 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:25.349 01:41:10 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:25.349 01:41:10 -- common/autotest_common.sh@861 -- # break 00:06:25.349 01:41:10 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:25.349 01:41:10 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:25.349 01:41:10 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.349 1+0 records in 00:06:25.349 1+0 records out 00:06:25.349 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209195 s, 19.6 MB/s 00:06:25.349 01:41:10 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.349 01:41:10 -- common/autotest_common.sh@874 -- # size=4096 00:06:25.349 01:41:10 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.349 01:41:10 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:25.349 01:41:10 -- common/autotest_common.sh@877 -- # return 0 00:06:25.349 01:41:10 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.349 01:41:10 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.349 01:41:10 -- bdev/nbd_common.sh@95 
-- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.349 01:41:10 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.349 01:41:10 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.607 { 00:06:25.607 "nbd_device": "/dev/nbd0", 00:06:25.607 "bdev_name": "Malloc0" 00:06:25.607 }, 00:06:25.607 { 00:06:25.607 "nbd_device": "/dev/nbd1", 00:06:25.607 "bdev_name": "Malloc1" 00:06:25.607 } 00:06:25.607 ]' 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.607 { 00:06:25.607 "nbd_device": "/dev/nbd0", 00:06:25.607 "bdev_name": "Malloc0" 00:06:25.607 }, 00:06:25.607 { 00:06:25.607 "nbd_device": "/dev/nbd1", 00:06:25.607 "bdev_name": "Malloc1" 00:06:25.607 } 00:06:25.607 ]' 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.607 /dev/nbd1' 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.607 /dev/nbd1' 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.607 256+0 records in 00:06:25.607 256+0 records out 00:06:25.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00489546 s, 214 MB/s 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:25.607 256+0 records in 00:06:25.607 256+0 records out 00:06:25.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195245 s, 53.7 MB/s 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:25.607 256+0 records in 00:06:25.607 256+0 records out 00:06:25.607 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242562 s, 43.2 MB/s 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@51 -- # local i 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.607 01:41:11 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.865 01:41:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.865 01:41:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.865 01:41:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.865 01:41:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.865 01:41:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.865 01:41:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.865 01:41:11 -- bdev/nbd_common.sh@41 -- # break 00:06:25.865 01:41:11 -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.865 01:41:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.865 01:41:11 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:26.122 01:41:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:26.122 01:41:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:26.122 01:41:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:26.122 01:41:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.122 01:41:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.122 01:41:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:26.122 01:41:11 -- bdev/nbd_common.sh@41 -- # break 00:06:26.122 01:41:11 -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.123 01:41:11 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.123 01:41:11 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.123 01:41:11 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.380 01:41:11 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.380 01:41:11 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.380 01:41:11 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.380 01:41:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.380 01:41:12 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.380 01:41:12 -- bdev/nbd_common.sh@65 -- # 
grep -c /dev/nbd 00:06:26.380 01:41:12 -- bdev/nbd_common.sh@65 -- # true 00:06:26.380 01:41:12 -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.380 01:41:12 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.380 01:41:12 -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.380 01:41:12 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.380 01:41:12 -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.380 01:41:12 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:26.638 01:41:12 -- event/event.sh@35 -- # sleep 3 00:06:26.896 [2024-04-15 01:41:12.502856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.156 [2024-04-15 01:41:12.593464] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.156 [2024-04-15 01:41:12.593468] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.156 [2024-04-15 01:41:12.654853] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:27.156 [2024-04-15 01:41:12.654929] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:29.685 01:41:15 -- event/event.sh@23 -- # for i in {0..2} 00:06:29.685 01:41:15 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:29.685 spdk_app_start Round 2 00:06:29.685 01:41:15 -- event/event.sh@25 -- # waitforlisten 2040479 /var/tmp/spdk-nbd.sock 00:06:29.685 01:41:15 -- common/autotest_common.sh@819 -- # '[' -z 2040479 ']' 00:06:29.685 01:41:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.685 01:41:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.685 01:41:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:29.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
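The round above repeats the same bring-up as the earlier rounds: two 64 MB malloc bdevs (4 KiB block size) are created over the app's NBD RPC socket and exported as /dev/nbd0 and /dev/nbd1. A minimal sketch of that flow, assuming a checkout-relative rpc.py path rather than the Jenkins workspace path used in the log:

    rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $rpc bdev_malloc_create 64 4096               # creates Malloc0
    $rpc bdev_malloc_create 64 4096               # creates Malloc1
    $rpc nbd_start_disk Malloc0 /dev/nbd0         # export bdev as NBD device
    $rpc nbd_start_disk Malloc1 /dev/nbd1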
00:06:29.685 01:41:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.685 01:41:15 -- common/autotest_common.sh@10 -- # set +x 00:06:29.942 01:41:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.942 01:41:15 -- common/autotest_common.sh@852 -- # return 0 00:06:29.942 01:41:15 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.201 Malloc0 00:06:30.201 01:41:15 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.459 Malloc1 00:06:30.459 01:41:16 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.459 01:41:16 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.459 01:41:16 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.459 01:41:16 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:30.459 01:41:16 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.459 01:41:16 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:30.459 01:41:16 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.459 01:41:16 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.459 01:41:16 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.459 01:41:16 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:30.459 01:41:16 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.459 01:41:16 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:30.459 01:41:16 -- bdev/nbd_common.sh@12 -- # local i 00:06:30.459 01:41:16 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:30.459 01:41:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.459 01:41:16 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:30.718 /dev/nbd0 00:06:30.718 01:41:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:30.718 01:41:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:30.718 01:41:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:30.718 01:41:16 -- common/autotest_common.sh@857 -- # local i 00:06:30.718 01:41:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:30.718 01:41:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:30.718 01:41:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:30.718 01:41:16 -- common/autotest_common.sh@861 -- # break 00:06:30.718 01:41:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:30.718 01:41:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:30.718 01:41:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.718 1+0 records in 00:06:30.718 1+0 records out 00:06:30.718 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000155757 s, 26.3 MB/s 00:06:30.718 01:41:16 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.718 01:41:16 -- common/autotest_common.sh@874 -- # size=4096 00:06:30.718 01:41:16 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.718 01:41:16 -- common/autotest_common.sh@876 -- # 
'[' 4096 '!=' 0 ']' 00:06:30.718 01:41:16 -- common/autotest_common.sh@877 -- # return 0 00:06:30.718 01:41:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.718 01:41:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.718 01:41:16 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:30.976 /dev/nbd1 00:06:30.976 01:41:16 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:30.976 01:41:16 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:30.976 01:41:16 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:30.976 01:41:16 -- common/autotest_common.sh@857 -- # local i 00:06:30.976 01:41:16 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:30.976 01:41:16 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:30.976 01:41:16 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:30.976 01:41:16 -- common/autotest_common.sh@861 -- # break 00:06:30.976 01:41:16 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:30.976 01:41:16 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:30.976 01:41:16 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:30.976 1+0 records in 00:06:30.976 1+0 records out 00:06:30.976 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216429 s, 18.9 MB/s 00:06:30.976 01:41:16 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.976 01:41:16 -- common/autotest_common.sh@874 -- # size=4096 00:06:30.976 01:41:16 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.976 01:41:16 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:30.976 01:41:16 -- common/autotest_common.sh@877 -- # return 0 00:06:30.976 01:41:16 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.976 01:41:16 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.976 01:41:16 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.976 01:41:16 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.976 01:41:16 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.235 01:41:16 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:31.235 { 00:06:31.235 "nbd_device": "/dev/nbd0", 00:06:31.235 "bdev_name": "Malloc0" 00:06:31.235 }, 00:06:31.235 { 00:06:31.235 "nbd_device": "/dev/nbd1", 00:06:31.235 "bdev_name": "Malloc1" 00:06:31.235 } 00:06:31.235 ]' 00:06:31.235 01:41:16 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:31.235 { 00:06:31.235 "nbd_device": "/dev/nbd0", 00:06:31.235 "bdev_name": "Malloc0" 00:06:31.235 }, 00:06:31.235 { 00:06:31.235 "nbd_device": "/dev/nbd1", 00:06:31.235 "bdev_name": "Malloc1" 00:06:31.235 } 00:06:31.235 ]' 00:06:31.235 01:41:16 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.235 01:41:16 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:31.235 /dev/nbd1' 00:06:31.235 01:41:16 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:31.235 /dev/nbd1' 00:06:31.235 01:41:16 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.235 01:41:16 -- bdev/nbd_common.sh@65 -- # count=2 00:06:31.235 01:41:16 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:31.235 01:41:16 -- bdev/nbd_common.sh@95 -- # count=2 00:06:31.235 01:41:16 -- 
bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:31.235 01:41:16 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:31.235 01:41:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.235 01:41:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.235 01:41:16 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:31.235 01:41:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.235 01:41:16 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:31.235 01:41:16 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:31.235 256+0 records in 00:06:31.235 256+0 records out 00:06:31.235 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00476125 s, 220 MB/s 00:06:31.235 01:41:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.235 01:41:16 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:31.493 256+0 records in 00:06:31.493 256+0 records out 00:06:31.493 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0239947 s, 43.7 MB/s 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:31.493 256+0 records in 00:06:31.493 256+0 records out 00:06:31.493 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024426 s, 42.9 MB/s 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@51 -- # local i 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.493 01:41:16 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.751 01:41:17 
-- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.751 01:41:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.751 01:41:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.751 01:41:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.751 01:41:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.751 01:41:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.751 01:41:17 -- bdev/nbd_common.sh@41 -- # break 00:06:31.751 01:41:17 -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.751 01:41:17 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.751 01:41:17 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.010 01:41:17 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.010 01:41:17 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:32.010 01:41:17 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.010 01:41:17 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.010 01:41:17 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.010 01:41:17 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:32.010 01:41:17 -- bdev/nbd_common.sh@41 -- # break 00:06:32.010 01:41:17 -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.010 01:41:17 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.010 01:41:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.010 01:41:17 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.268 01:41:17 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.268 01:41:17 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:32.268 01:41:17 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.268 01:41:17 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:32.268 01:41:17 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:32.268 01:41:17 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.268 01:41:17 -- bdev/nbd_common.sh@65 -- # true 00:06:32.268 01:41:17 -- bdev/nbd_common.sh@65 -- # count=0 00:06:32.268 01:41:17 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:32.268 01:41:17 -- bdev/nbd_common.sh@104 -- # count=0 00:06:32.268 01:41:17 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:32.268 01:41:17 -- bdev/nbd_common.sh@109 -- # return 0 00:06:32.268 01:41:17 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:32.526 01:41:17 -- event/event.sh@35 -- # sleep 3 00:06:32.785 [2024-04-15 01:41:18.206669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.785 [2024-04-15 01:41:18.294122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.785 [2024-04-15 01:41:18.294127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.785 [2024-04-15 01:41:18.355860] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.785 [2024-04-15 01:41:18.355953] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
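Each round then checks data integrity through the NBD layer: 1 MiB of random data is written to both devices with dd and compared back with cmp before the disks are stopped. A sketch of that verify step, with a mktemp path standing in for the nbdrandtest file seen above:

    tmp=$(mktemp)
    dd if=/dev/urandom of="$tmp" bs=4096 count=256              # 1 MiB of random data
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct   # write through NBD
        cmp -b -n 1M "$tmp" "$nbd"                              # read back, byte compare
    done
    rm -f "$tmp"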
00:06:36.109 01:41:20 -- event/event.sh@38 -- # waitforlisten 2040479 /var/tmp/spdk-nbd.sock 00:06:36.109 01:41:20 -- common/autotest_common.sh@819 -- # '[' -z 2040479 ']' 00:06:36.109 01:41:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.109 01:41:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.109 01:41:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:36.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:36.109 01:41:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.109 01:41:20 -- common/autotest_common.sh@10 -- # set +x 00:06:36.109 01:41:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.109 01:41:21 -- common/autotest_common.sh@852 -- # return 0 00:06:36.109 01:41:21 -- event/event.sh@39 -- # killprocess 2040479 00:06:36.109 01:41:21 -- common/autotest_common.sh@926 -- # '[' -z 2040479 ']' 00:06:36.109 01:41:21 -- common/autotest_common.sh@930 -- # kill -0 2040479 00:06:36.109 01:41:21 -- common/autotest_common.sh@931 -- # uname 00:06:36.109 01:41:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:36.109 01:41:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2040479 00:06:36.109 01:41:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:36.109 01:41:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:36.109 01:41:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2040479' 00:06:36.109 killing process with pid 2040479 00:06:36.109 01:41:21 -- common/autotest_common.sh@945 -- # kill 2040479 00:06:36.109 01:41:21 -- common/autotest_common.sh@950 -- # wait 2040479 00:06:36.109 spdk_app_start is called in Round 0. 00:06:36.109 Shutdown signal received, stop current app iteration 00:06:36.109 Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 reinitialization... 00:06:36.109 spdk_app_start is called in Round 1. 00:06:36.109 Shutdown signal received, stop current app iteration 00:06:36.109 Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 reinitialization... 00:06:36.109 spdk_app_start is called in Round 2. 00:06:36.109 Shutdown signal received, stop current app iteration 00:06:36.109 Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 reinitialization... 00:06:36.110 spdk_app_start is called in Round 3. 
00:06:36.110 Shutdown signal received, stop current app iteration 00:06:36.110 01:41:21 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:36.110 01:41:21 -- event/event.sh@42 -- # return 0 00:06:36.110 00:06:36.110 real 0m18.470s 00:06:36.110 user 0m40.155s 00:06:36.110 sys 0m3.208s 00:06:36.110 01:41:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.110 01:41:21 -- common/autotest_common.sh@10 -- # set +x 00:06:36.110 ************************************ 00:06:36.110 END TEST app_repeat 00:06:36.110 ************************************ 00:06:36.110 01:41:21 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:36.110 01:41:21 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:36.110 01:41:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:36.110 01:41:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.110 01:41:21 -- common/autotest_common.sh@10 -- # set +x 00:06:36.110 ************************************ 00:06:36.110 START TEST cpu_locks 00:06:36.110 ************************************ 00:06:36.110 01:41:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:36.110 * Looking for test storage... 00:06:36.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:36.110 01:41:21 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:36.110 01:41:21 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:36.110 01:41:21 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:36.110 01:41:21 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:36.110 01:41:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:36.110 01:41:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:36.110 01:41:21 -- common/autotest_common.sh@10 -- # set +x 00:06:36.110 ************************************ 00:06:36.110 START TEST default_locks 00:06:36.110 ************************************ 00:06:36.110 01:41:21 -- common/autotest_common.sh@1104 -- # default_locks 00:06:36.110 01:41:21 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2043012 00:06:36.110 01:41:21 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.110 01:41:21 -- event/cpu_locks.sh@47 -- # waitforlisten 2043012 00:06:36.110 01:41:21 -- common/autotest_common.sh@819 -- # '[' -z 2043012 ']' 00:06:36.110 01:41:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.110 01:41:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.110 01:41:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:36.110 01:41:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.110 01:41:21 -- common/autotest_common.sh@10 -- # set +x 00:06:36.110 [2024-04-15 01:41:21.583542] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:36.110 [2024-04-15 01:41:21.583633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2043012 ] 00:06:36.110 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.110 [2024-04-15 01:41:21.641311] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.110 [2024-04-15 01:41:21.728162] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:36.110 [2024-04-15 01:41:21.728323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.053 01:41:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:37.053 01:41:22 -- common/autotest_common.sh@852 -- # return 0 00:06:37.053 01:41:22 -- event/cpu_locks.sh@49 -- # locks_exist 2043012 00:06:37.053 01:41:22 -- event/cpu_locks.sh@22 -- # lslocks -p 2043012 00:06:37.053 01:41:22 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:37.311 lslocks: write error 00:06:37.311 01:41:22 -- event/cpu_locks.sh@50 -- # killprocess 2043012 00:06:37.311 01:41:22 -- common/autotest_common.sh@926 -- # '[' -z 2043012 ']' 00:06:37.311 01:41:22 -- common/autotest_common.sh@930 -- # kill -0 2043012 00:06:37.311 01:41:22 -- common/autotest_common.sh@931 -- # uname 00:06:37.311 01:41:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:37.311 01:41:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2043012 00:06:37.311 01:41:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:37.311 01:41:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:37.311 01:41:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2043012' 00:06:37.311 killing process with pid 2043012 00:06:37.311 01:41:22 -- common/autotest_common.sh@945 -- # kill 2043012 00:06:37.311 01:41:22 -- common/autotest_common.sh@950 -- # wait 2043012 00:06:37.877 01:41:23 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2043012 00:06:37.877 01:41:23 -- common/autotest_common.sh@640 -- # local es=0 00:06:37.877 01:41:23 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2043012 00:06:37.877 01:41:23 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:37.877 01:41:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.877 01:41:23 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:37.877 01:41:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:37.877 01:41:23 -- common/autotest_common.sh@643 -- # waitforlisten 2043012 00:06:37.877 01:41:23 -- common/autotest_common.sh@819 -- # '[' -z 2043012 ']' 00:06:37.877 01:41:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.877 01:41:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:37.877 01:41:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
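The default_locks test keys off the per-core lock files an SPDK target takes when launched with a core mask: lslocks on the target pid should show an spdk_cpu_lock entry while it runs. A rough sketch of that check, with a checkout-relative binary path and a plain sleep standing in for the waitforlisten helper used in the log:

    ./build/bin/spdk_tgt -m 0x1 &
    pid=$!
    sleep 1                                                     # stand-in for waitforlisten
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"
    kill "$pid"; wait "$pid"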
00:06:37.877 01:41:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.877 01:41:23 -- common/autotest_common.sh@10 -- # set +x 00:06:37.877 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2043012) - No such process 00:06:37.877 ERROR: process (pid: 2043012) is no longer running 00:06:37.877 01:41:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:37.877 01:41:23 -- common/autotest_common.sh@852 -- # return 1 00:06:37.877 01:41:23 -- common/autotest_common.sh@643 -- # es=1 00:06:37.877 01:41:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:37.877 01:41:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:37.877 01:41:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:37.877 01:41:23 -- event/cpu_locks.sh@54 -- # no_locks 00:06:37.877 01:41:23 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:37.877 01:41:23 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:37.877 01:41:23 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:37.877 00:06:37.877 real 0m1.791s 00:06:37.877 user 0m1.908s 00:06:37.877 sys 0m0.555s 00:06:37.877 01:41:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.877 01:41:23 -- common/autotest_common.sh@10 -- # set +x 00:06:37.877 ************************************ 00:06:37.877 END TEST default_locks 00:06:37.877 ************************************ 00:06:37.877 01:41:23 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:37.877 01:41:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.877 01:41:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.877 01:41:23 -- common/autotest_common.sh@10 -- # set +x 00:06:37.877 ************************************ 00:06:37.877 START TEST default_locks_via_rpc 00:06:37.877 ************************************ 00:06:37.877 01:41:23 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:37.877 01:41:23 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2043192 00:06:37.877 01:41:23 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.877 01:41:23 -- event/cpu_locks.sh@63 -- # waitforlisten 2043192 00:06:37.877 01:41:23 -- common/autotest_common.sh@819 -- # '[' -z 2043192 ']' 00:06:37.877 01:41:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.877 01:41:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:37.877 01:41:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.877 01:41:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.877 01:41:23 -- common/autotest_common.sh@10 -- # set +x 00:06:37.877 [2024-04-15 01:41:23.404164] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:37.877 [2024-04-15 01:41:23.404241] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2043192 ] 00:06:37.877 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.877 [2024-04-15 01:41:23.472693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.136 [2024-04-15 01:41:23.561963] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.136 [2024-04-15 01:41:23.562154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.071 01:41:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.071 01:41:24 -- common/autotest_common.sh@852 -- # return 0 00:06:39.071 01:41:24 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:39.071 01:41:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:39.071 01:41:24 -- common/autotest_common.sh@10 -- # set +x 00:06:39.071 01:41:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:39.071 01:41:24 -- event/cpu_locks.sh@67 -- # no_locks 00:06:39.071 01:41:24 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:39.071 01:41:24 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:39.071 01:41:24 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:39.071 01:41:24 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:39.071 01:41:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:39.071 01:41:24 -- common/autotest_common.sh@10 -- # set +x 00:06:39.071 01:41:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:39.071 01:41:24 -- event/cpu_locks.sh@71 -- # locks_exist 2043192 00:06:39.071 01:41:24 -- event/cpu_locks.sh@22 -- # lslocks -p 2043192 00:06:39.071 01:41:24 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.329 01:41:24 -- event/cpu_locks.sh@73 -- # killprocess 2043192 00:06:39.329 01:41:24 -- common/autotest_common.sh@926 -- # '[' -z 2043192 ']' 00:06:39.329 01:41:24 -- common/autotest_common.sh@930 -- # kill -0 2043192 00:06:39.329 01:41:24 -- common/autotest_common.sh@931 -- # uname 00:06:39.329 01:41:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:39.329 01:41:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2043192 00:06:39.329 01:41:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:39.329 01:41:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:39.329 01:41:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2043192' 00:06:39.329 killing process with pid 2043192 00:06:39.329 01:41:24 -- common/autotest_common.sh@945 -- # kill 2043192 00:06:39.329 01:41:24 -- common/autotest_common.sh@950 -- # wait 2043192 00:06:39.588 00:06:39.588 real 0m1.801s 00:06:39.588 user 0m1.951s 00:06:39.588 sys 0m0.576s 00:06:39.588 01:41:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.588 01:41:25 -- common/autotest_common.sh@10 -- # set +x 00:06:39.588 ************************************ 00:06:39.588 END TEST default_locks_via_rpc 00:06:39.588 ************************************ 00:06:39.588 01:41:25 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:39.588 01:41:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:39.588 01:41:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:39.588 01:41:25 -- 
common/autotest_common.sh@10 -- # set +x 00:06:39.588 ************************************ 00:06:39.588 START TEST non_locking_app_on_locked_coremask 00:06:39.588 ************************************ 00:06:39.588 01:41:25 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:39.588 01:41:25 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2043492 00:06:39.588 01:41:25 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:39.588 01:41:25 -- event/cpu_locks.sh@81 -- # waitforlisten 2043492 /var/tmp/spdk.sock 00:06:39.588 01:41:25 -- common/autotest_common.sh@819 -- # '[' -z 2043492 ']' 00:06:39.588 01:41:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.588 01:41:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.588 01:41:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.588 01:41:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.588 01:41:25 -- common/autotest_common.sh@10 -- # set +x 00:06:39.588 [2024-04-15 01:41:25.227404] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:39.588 [2024-04-15 01:41:25.227478] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2043492 ] 00:06:39.848 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.848 [2024-04-15 01:41:25.287547] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.848 [2024-04-15 01:41:25.373447] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:39.848 [2024-04-15 01:41:25.373602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.783 01:41:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:40.783 01:41:26 -- common/autotest_common.sh@852 -- # return 0 00:06:40.783 01:41:26 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2043628 00:06:40.783 01:41:26 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:40.783 01:41:26 -- event/cpu_locks.sh@85 -- # waitforlisten 2043628 /var/tmp/spdk2.sock 00:06:40.783 01:41:26 -- common/autotest_common.sh@819 -- # '[' -z 2043628 ']' 00:06:40.783 01:41:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:40.783 01:41:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:40.783 01:41:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:40.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:40.783 01:41:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:40.783 01:41:26 -- common/autotest_common.sh@10 -- # set +x 00:06:40.783 [2024-04-15 01:41:26.250986] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:40.783 [2024-04-15 01:41:26.251080] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2043628 ] 00:06:40.783 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.783 [2024-04-15 01:41:26.342919] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:40.783 [2024-04-15 01:41:26.342953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.042 [2024-04-15 01:41:26.524199] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:41.042 [2024-04-15 01:41:26.524375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.608 01:41:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.608 01:41:27 -- common/autotest_common.sh@852 -- # return 0 00:06:41.608 01:41:27 -- event/cpu_locks.sh@87 -- # locks_exist 2043492 00:06:41.608 01:41:27 -- event/cpu_locks.sh@22 -- # lslocks -p 2043492 00:06:41.608 01:41:27 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:42.176 lslocks: write error 00:06:42.176 01:41:27 -- event/cpu_locks.sh@89 -- # killprocess 2043492 00:06:42.176 01:41:27 -- common/autotest_common.sh@926 -- # '[' -z 2043492 ']' 00:06:42.176 01:41:27 -- common/autotest_common.sh@930 -- # kill -0 2043492 00:06:42.176 01:41:27 -- common/autotest_common.sh@931 -- # uname 00:06:42.176 01:41:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:42.176 01:41:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2043492 00:06:42.176 01:41:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:42.176 01:41:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:42.176 01:41:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2043492' 00:06:42.176 killing process with pid 2043492 00:06:42.176 01:41:27 -- common/autotest_common.sh@945 -- # kill 2043492 00:06:42.176 01:41:27 -- common/autotest_common.sh@950 -- # wait 2043492 00:06:43.110 01:41:28 -- event/cpu_locks.sh@90 -- # killprocess 2043628 00:06:43.110 01:41:28 -- common/autotest_common.sh@926 -- # '[' -z 2043628 ']' 00:06:43.110 01:41:28 -- common/autotest_common.sh@930 -- # kill -0 2043628 00:06:43.110 01:41:28 -- common/autotest_common.sh@931 -- # uname 00:06:43.110 01:41:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:43.110 01:41:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2043628 00:06:43.110 01:41:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:43.110 01:41:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:43.110 01:41:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2043628' 00:06:43.110 killing process with pid 2043628 00:06:43.110 01:41:28 -- common/autotest_common.sh@945 -- # kill 2043628 00:06:43.110 01:41:28 -- common/autotest_common.sh@950 -- # wait 2043628 00:06:43.675 00:06:43.675 real 0m3.859s 00:06:43.675 user 0m4.212s 00:06:43.675 sys 0m1.054s 00:06:43.675 01:41:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.675 01:41:29 -- common/autotest_common.sh@10 -- # set +x 00:06:43.675 ************************************ 00:06:43.675 END TEST non_locking_app_on_locked_coremask 00:06:43.675 ************************************ 00:06:43.675 01:41:29 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:06:43.675 01:41:29 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:43.675 01:41:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.675 01:41:29 -- common/autotest_common.sh@10 -- # set +x 00:06:43.676 ************************************ 00:06:43.676 START TEST locking_app_on_unlocked_coremask 00:06:43.676 ************************************ 00:06:43.676 01:41:29 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:43.676 01:41:29 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2043945 00:06:43.676 01:41:29 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:43.676 01:41:29 -- event/cpu_locks.sh@99 -- # waitforlisten 2043945 /var/tmp/spdk.sock 00:06:43.676 01:41:29 -- common/autotest_common.sh@819 -- # '[' -z 2043945 ']' 00:06:43.676 01:41:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.676 01:41:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:43.676 01:41:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.676 01:41:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:43.676 01:41:29 -- common/autotest_common.sh@10 -- # set +x 00:06:43.676 [2024-04-15 01:41:29.119310] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:43.676 [2024-04-15 01:41:29.119408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2043945 ] 00:06:43.676 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.676 [2024-04-15 01:41:29.178108] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:43.676 [2024-04-15 01:41:29.178144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.676 [2024-04-15 01:41:29.267467] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:43.676 [2024-04-15 01:41:29.267657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.609 01:41:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:44.609 01:41:30 -- common/autotest_common.sh@852 -- # return 0 00:06:44.609 01:41:30 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2044085 00:06:44.609 01:41:30 -- event/cpu_locks.sh@103 -- # waitforlisten 2044085 /var/tmp/spdk2.sock 00:06:44.609 01:41:30 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:44.609 01:41:30 -- common/autotest_common.sh@819 -- # '[' -z 2044085 ']' 00:06:44.609 01:41:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.609 01:41:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:44.609 01:41:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
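Two helpers dominate the traces on either side of this point. waitforlisten polls until the target pid is alive and its UNIX-domain RPC socket exists, and locks_exist asks the kernel for the pid's file locks; the stray "lslocks: write error" lines in this log are benign, since grep -q exits at the first match and lslocks then writes into a closed pipe. Minimal stand-ins under those assumptions (the real autotest_common.sh versions do more):

    waitforlisten() {                 # sketch only, not SPDK's actual helper
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while (( max_retries-- )); do
            kill -0 "$pid" 2>/dev/null || return 1   # target process died
            [[ -S $rpc_addr ]] && return 0           # RPC socket showed up
            sleep 0.1
        done
        return 1
    }
    locks_exist() {                   # matches the lslocks | grep trace above
        lslocks -p "$1" | grep -q spdk_cpu_lock
    }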
00:06:44.609 01:41:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:44.609 01:41:30 -- common/autotest_common.sh@10 -- # set +x 00:06:44.609 [2024-04-15 01:41:30.097077] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:44.609 [2024-04-15 01:41:30.097200] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044085 ] 00:06:44.609 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.609 [2024-04-15 01:41:30.197935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.868 [2024-04-15 01:41:30.380683] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:44.868 [2024-04-15 01:41:30.380857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.434 01:41:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:45.434 01:41:31 -- common/autotest_common.sh@852 -- # return 0 00:06:45.434 01:41:31 -- event/cpu_locks.sh@105 -- # locks_exist 2044085 00:06:45.434 01:41:31 -- event/cpu_locks.sh@22 -- # lslocks -p 2044085 00:06:45.434 01:41:31 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.998 lslocks: write error 00:06:45.998 01:41:31 -- event/cpu_locks.sh@107 -- # killprocess 2043945 00:06:45.998 01:41:31 -- common/autotest_common.sh@926 -- # '[' -z 2043945 ']' 00:06:45.998 01:41:31 -- common/autotest_common.sh@930 -- # kill -0 2043945 00:06:45.998 01:41:31 -- common/autotest_common.sh@931 -- # uname 00:06:45.998 01:41:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:45.998 01:41:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2043945 00:06:45.998 01:41:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:45.998 01:41:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:45.998 01:41:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2043945' 00:06:45.998 killing process with pid 2043945 00:06:45.998 01:41:31 -- common/autotest_common.sh@945 -- # kill 2043945 00:06:45.998 01:41:31 -- common/autotest_common.sh@950 -- # wait 2043945 00:06:46.930 01:41:32 -- event/cpu_locks.sh@108 -- # killprocess 2044085 00:06:46.930 01:41:32 -- common/autotest_common.sh@926 -- # '[' -z 2044085 ']' 00:06:46.930 01:41:32 -- common/autotest_common.sh@930 -- # kill -0 2044085 00:06:46.930 01:41:32 -- common/autotest_common.sh@931 -- # uname 00:06:46.930 01:41:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:46.930 01:41:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2044085 00:06:46.930 01:41:32 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:46.930 01:41:32 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:46.930 01:41:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2044085' 00:06:46.930 killing process with pid 2044085 00:06:46.930 01:41:32 -- common/autotest_common.sh@945 -- # kill 2044085 00:06:46.930 01:41:32 -- common/autotest_common.sh@950 -- # wait 2044085 00:06:47.188 00:06:47.188 real 0m3.644s 00:06:47.188 user 0m3.921s 00:06:47.188 sys 0m1.107s 00:06:47.188 01:41:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.188 01:41:32 -- common/autotest_common.sh@10 -- # set +x 00:06:47.188 ************************************ 00:06:47.188 END TEST locking_app_on_unlocked_coremask 
00:06:47.189 ************************************ 00:06:47.189 01:41:32 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:47.189 01:41:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:47.189 01:41:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.189 01:41:32 -- common/autotest_common.sh@10 -- # set +x 00:06:47.189 ************************************ 00:06:47.189 START TEST locking_app_on_locked_coremask 00:06:47.189 ************************************ 00:06:47.189 01:41:32 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:47.189 01:41:32 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2044496 00:06:47.189 01:41:32 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:47.189 01:41:32 -- event/cpu_locks.sh@116 -- # waitforlisten 2044496 /var/tmp/spdk.sock 00:06:47.189 01:41:32 -- common/autotest_common.sh@819 -- # '[' -z 2044496 ']' 00:06:47.189 01:41:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.189 01:41:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:47.189 01:41:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.189 01:41:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:47.189 01:41:32 -- common/autotest_common.sh@10 -- # set +x 00:06:47.189 [2024-04-15 01:41:32.789428] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:47.189 [2024-04-15 01:41:32.789505] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044496 ] 00:06:47.189 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.447 [2024-04-15 01:41:32.849178] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.447 [2024-04-15 01:41:32.935431] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:47.447 [2024-04-15 01:41:32.935595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.378 01:41:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:48.378 01:41:33 -- common/autotest_common.sh@852 -- # return 0 00:06:48.378 01:41:33 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2044533 00:06:48.378 01:41:33 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2044533 /var/tmp/spdk2.sock 00:06:48.378 01:41:33 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:48.378 01:41:33 -- common/autotest_common.sh@640 -- # local es=0 00:06:48.378 01:41:33 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2044533 /var/tmp/spdk2.sock 00:06:48.378 01:41:33 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:48.378 01:41:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:48.378 01:41:33 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:48.378 01:41:33 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:48.378 01:41:33 -- common/autotest_common.sh@643 -- # waitforlisten 2044533 /var/tmp/spdk2.sock 00:06:48.378 01:41:33 -- common/autotest_common.sh@819 -- 
# '[' -z 2044533 ']' 00:06:48.378 01:41:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:48.378 01:41:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:48.378 01:41:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:48.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:48.378 01:41:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:48.378 01:41:33 -- common/autotest_common.sh@10 -- # set +x 00:06:48.378 [2024-04-15 01:41:33.778114] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:48.378 [2024-04-15 01:41:33.778193] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044533 ] 00:06:48.378 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.378 [2024-04-15 01:41:33.877555] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2044496 has claimed it. 00:06:48.378 [2024-04-15 01:41:33.877612] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:48.943 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2044533) - No such process 00:06:48.943 ERROR: process (pid: 2044533) is no longer running 00:06:48.943 01:41:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:48.943 01:41:34 -- common/autotest_common.sh@852 -- # return 1 00:06:48.943 01:41:34 -- common/autotest_common.sh@643 -- # es=1 00:06:48.943 01:41:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:48.943 01:41:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:48.943 01:41:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:48.943 01:41:34 -- event/cpu_locks.sh@122 -- # locks_exist 2044496 00:06:48.943 01:41:34 -- event/cpu_locks.sh@22 -- # lslocks -p 2044496 00:06:48.943 01:41:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.509 lslocks: write error 00:06:49.509 01:41:34 -- event/cpu_locks.sh@124 -- # killprocess 2044496 00:06:49.509 01:41:34 -- common/autotest_common.sh@926 -- # '[' -z 2044496 ']' 00:06:49.509 01:41:34 -- common/autotest_common.sh@930 -- # kill -0 2044496 00:06:49.509 01:41:34 -- common/autotest_common.sh@931 -- # uname 00:06:49.509 01:41:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:49.509 01:41:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2044496 00:06:49.509 01:41:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:49.509 01:41:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:49.509 01:41:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2044496' 00:06:49.509 killing process with pid 2044496 00:06:49.509 01:41:34 -- common/autotest_common.sh@945 -- # kill 2044496 00:06:49.509 01:41:34 -- common/autotest_common.sh@950 -- # wait 2044496 00:06:49.767 00:06:49.767 real 0m2.552s 00:06:49.767 user 0m2.892s 00:06:49.767 sys 0m0.709s 00:06:49.767 01:41:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.767 01:41:35 -- common/autotest_common.sh@10 -- # set +x 00:06:49.767 ************************************ 00:06:49.767 END TEST locking_app_on_locked_coremask 00:06:49.767 ************************************ 00:06:49.767 
01:41:35 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:49.767 01:41:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:49.767 01:41:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.767 01:41:35 -- common/autotest_common.sh@10 -- # set +x 00:06:49.767 ************************************ 00:06:49.767 START TEST locking_overlapped_coremask 00:06:49.767 ************************************ 00:06:49.767 01:41:35 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:49.767 01:41:35 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2044831 00:06:49.767 01:41:35 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:49.767 01:41:35 -- event/cpu_locks.sh@133 -- # waitforlisten 2044831 /var/tmp/spdk.sock 00:06:49.767 01:41:35 -- common/autotest_common.sh@819 -- # '[' -z 2044831 ']' 00:06:49.767 01:41:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.767 01:41:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:49.767 01:41:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:49.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.767 01:41:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:49.767 01:41:35 -- common/autotest_common.sh@10 -- # set +x 00:06:49.767 [2024-04-15 01:41:35.364519] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:49.767 [2024-04-15 01:41:35.364597] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044831 ] 00:06:49.767 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.026 [2024-04-15 01:41:35.427754] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.026 [2024-04-15 01:41:35.523405] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:50.026 [2024-04-15 01:41:35.525069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.026 [2024-04-15 01:41:35.525124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.026 [2024-04-15 01:41:35.525143] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.960 01:41:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:50.960 01:41:36 -- common/autotest_common.sh@852 -- # return 0 00:06:50.960 01:41:36 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2044969 00:06:50.960 01:41:36 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2044969 /var/tmp/spdk2.sock 00:06:50.960 01:41:36 -- common/autotest_common.sh@640 -- # local es=0 00:06:50.960 01:41:36 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 2044969 /var/tmp/spdk2.sock 00:06:50.960 01:41:36 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:50.960 01:41:36 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:50.960 01:41:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:50.960 01:41:36 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:50.960 01:41:36 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:50.960 01:41:36 
-- common/autotest_common.sh@643 -- # waitforlisten 2044969 /var/tmp/spdk2.sock 00:06:50.960 01:41:36 -- common/autotest_common.sh@819 -- # '[' -z 2044969 ']' 00:06:50.960 01:41:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.960 01:41:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:50.960 01:41:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.960 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.960 01:41:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:50.960 01:41:36 -- common/autotest_common.sh@10 -- # set +x 00:06:50.960 [2024-04-15 01:41:36.376271] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:50.960 [2024-04-15 01:41:36.376364] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2044969 ] 00:06:50.960 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.960 [2024-04-15 01:41:36.468617] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2044831 has claimed it. 00:06:50.960 [2024-04-15 01:41:36.468673] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:51.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 834: kill: (2044969) - No such process 00:06:51.526 ERROR: process (pid: 2044969) is no longer running 00:06:51.526 01:41:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:51.526 01:41:37 -- common/autotest_common.sh@852 -- # return 1 00:06:51.526 01:41:37 -- common/autotest_common.sh@643 -- # es=1 00:06:51.526 01:41:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:51.526 01:41:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:51.526 01:41:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:51.526 01:41:37 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:51.526 01:41:37 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:51.526 01:41:37 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:51.526 01:41:37 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:51.526 01:41:37 -- event/cpu_locks.sh@141 -- # killprocess 2044831 00:06:51.526 01:41:37 -- common/autotest_common.sh@926 -- # '[' -z 2044831 ']' 00:06:51.526 01:41:37 -- common/autotest_common.sh@930 -- # kill -0 2044831 00:06:51.526 01:41:37 -- common/autotest_common.sh@931 -- # uname 00:06:51.526 01:41:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:51.526 01:41:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2044831 00:06:51.526 01:41:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:51.526 01:41:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:51.526 01:41:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2044831' 00:06:51.526 killing process with pid 2044831 00:06:51.526 01:41:37 -- common/autotest_common.sh@945 -- # kill 2044831 00:06:51.526 01:41:37 
-- common/autotest_common.sh@950 -- # wait 2044831 00:06:52.093 00:06:52.093 real 0m2.169s 00:06:52.093 user 0m6.215s 00:06:52.093 sys 0m0.497s 00:06:52.093 01:41:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.093 01:41:37 -- common/autotest_common.sh@10 -- # set +x 00:06:52.093 ************************************ 00:06:52.093 END TEST locking_overlapped_coremask 00:06:52.093 ************************************ 00:06:52.093 01:41:37 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:52.093 01:41:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:52.093 01:41:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.093 01:41:37 -- common/autotest_common.sh@10 -- # set +x 00:06:52.093 ************************************ 00:06:52.093 START TEST locking_overlapped_coremask_via_rpc 00:06:52.093 ************************************ 00:06:52.093 01:41:37 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:52.093 01:41:37 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2045139 00:06:52.093 01:41:37 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:52.093 01:41:37 -- event/cpu_locks.sh@149 -- # waitforlisten 2045139 /var/tmp/spdk.sock 00:06:52.093 01:41:37 -- common/autotest_common.sh@819 -- # '[' -z 2045139 ']' 00:06:52.093 01:41:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:52.093 01:41:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:52.093 01:41:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:52.093 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:52.093 01:41:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:52.093 01:41:37 -- common/autotest_common.sh@10 -- # set +x 00:06:52.093 [2024-04-15 01:41:37.562915] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:52.093 [2024-04-15 01:41:37.562990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2045139 ] 00:06:52.093 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.093 [2024-04-15 01:41:37.621237] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
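Both overlapped-coremask tests, the one just finished and the via_rpc variant starting here, hinge on the same bit overlap: the primary target runs with -m 0x7 (cores 0-2) and the secondary with -m 0x1c (cores 2-4), so exactly one core is contested, which is why every failure names core 2:

    printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, bit 2, i.e. core 2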
00:06:52.093 [2024-04-15 01:41:37.621273] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:52.093 [2024-04-15 01:41:37.709152] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:52.093 [2024-04-15 01:41:37.709330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.093 [2024-04-15 01:41:37.709389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:52.093 [2024-04-15 01:41:37.709392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.029 01:41:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:53.029 01:41:38 -- common/autotest_common.sh@852 -- # return 0 00:06:53.029 01:41:38 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2045277 00:06:53.029 01:41:38 -- event/cpu_locks.sh@153 -- # waitforlisten 2045277 /var/tmp/spdk2.sock 00:06:53.029 01:41:38 -- common/autotest_common.sh@819 -- # '[' -z 2045277 ']' 00:06:53.029 01:41:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.029 01:41:38 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:53.029 01:41:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:53.029 01:41:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.029 01:41:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:53.029 01:41:38 -- common/autotest_common.sh@10 -- # set +x 00:06:53.029 [2024-04-15 01:41:38.568053] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:53.029 [2024-04-15 01:41:38.568146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2045277 ] 00:06:53.029 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.029 [2024-04-15 01:41:38.655816] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:53.029 [2024-04-15 01:41:38.655849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.287 [2024-04-15 01:41:38.825723] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:53.287 [2024-04-15 01:41:38.825948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.287 [2024-04-15 01:41:38.829104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:53.287 [2024-04-15 01:41:38.829107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.251 01:41:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:54.251 01:41:39 -- common/autotest_common.sh@852 -- # return 0 00:06:54.251 01:41:39 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:54.251 01:41:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.251 01:41:39 -- common/autotest_common.sh@10 -- # set +x 00:06:54.251 01:41:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:54.251 01:41:39 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:54.251 01:41:39 -- common/autotest_common.sh@640 -- # local es=0 00:06:54.251 01:41:39 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:54.252 01:41:39 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:54.252 01:41:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.252 01:41:39 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:54.252 01:41:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:54.252 01:41:39 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:54.252 01:41:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:54.252 01:41:39 -- common/autotest_common.sh@10 -- # set +x 00:06:54.252 [2024-04-15 01:41:39.526157] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2045139 has claimed it. 00:06:54.252 request: 00:06:54.252 { 00:06:54.252 "method": "framework_enable_cpumask_locks", 00:06:54.252 "req_id": 1 00:06:54.252 } 00:06:54.252 Got JSON-RPC error response 00:06:54.252 response: 00:06:54.252 { 00:06:54.252 "code": -32603, 00:06:54.252 "message": "Failed to claim CPU core: 2" 00:06:54.252 } 00:06:54.252 01:41:39 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:54.252 01:41:39 -- common/autotest_common.sh@643 -- # es=1 00:06:54.252 01:41:39 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:54.252 01:41:39 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:54.252 01:41:39 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:54.252 01:41:39 -- event/cpu_locks.sh@158 -- # waitforlisten 2045139 /var/tmp/spdk.sock 00:06:54.252 01:41:39 -- common/autotest_common.sh@819 -- # '[' -z 2045139 ']' 00:06:54.252 01:41:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.252 01:41:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:54.252 01:41:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
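The via_rpc variant starts both targets with --disable-cpumask-locks and claims the locks afterwards over JSON-RPC: the first framework_enable_cpumask_locks call (against /var/tmp/spdk.sock) wins cores 0-2, so the same call against the second target fails with the -32603 response shown above. Outside the harness the equivalent calls would look like this (rpc.py path assumed):

    scripts/rpc.py -s /var/tmp/spdk.sock  framework_enable_cpumask_locks   # succeeds, claims 0x7
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # -32603, core 2 already claimed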
00:06:54.252 01:41:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:54.252 01:41:39 -- common/autotest_common.sh@10 -- # set +x 00:06:54.252 01:41:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:54.252 01:41:39 -- common/autotest_common.sh@852 -- # return 0 00:06:54.252 01:41:39 -- event/cpu_locks.sh@159 -- # waitforlisten 2045277 /var/tmp/spdk2.sock 00:06:54.252 01:41:39 -- common/autotest_common.sh@819 -- # '[' -z 2045277 ']' 00:06:54.252 01:41:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.252 01:41:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:54.252 01:41:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.252 01:41:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:54.252 01:41:39 -- common/autotest_common.sh@10 -- # set +x 00:06:54.510 01:41:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:54.510 01:41:40 -- common/autotest_common.sh@852 -- # return 0 00:06:54.510 01:41:40 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:54.510 01:41:40 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:54.510 01:41:40 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:54.510 01:41:40 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:54.510 00:06:54.510 real 0m2.499s 00:06:54.510 user 0m1.214s 00:06:54.510 sys 0m0.210s 00:06:54.510 01:41:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.510 01:41:40 -- common/autotest_common.sh@10 -- # set +x 00:06:54.510 ************************************ 00:06:54.510 END TEST locking_overlapped_coremask_via_rpc 00:06:54.510 ************************************ 00:06:54.510 01:41:40 -- event/cpu_locks.sh@174 -- # cleanup 00:06:54.510 01:41:40 -- event/cpu_locks.sh@15 -- # [[ -z 2045139 ]] 00:06:54.510 01:41:40 -- event/cpu_locks.sh@15 -- # killprocess 2045139 00:06:54.510 01:41:40 -- common/autotest_common.sh@926 -- # '[' -z 2045139 ']' 00:06:54.510 01:41:40 -- common/autotest_common.sh@930 -- # kill -0 2045139 00:06:54.510 01:41:40 -- common/autotest_common.sh@931 -- # uname 00:06:54.510 01:41:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:54.510 01:41:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2045139 00:06:54.510 01:41:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:54.510 01:41:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:54.510 01:41:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2045139' 00:06:54.510 killing process with pid 2045139 00:06:54.510 01:41:40 -- common/autotest_common.sh@945 -- # kill 2045139 00:06:54.510 01:41:40 -- common/autotest_common.sh@950 -- # wait 2045139 00:06:55.075 01:41:40 -- event/cpu_locks.sh@16 -- # [[ -z 2045277 ]] 00:06:55.075 01:41:40 -- event/cpu_locks.sh@16 -- # killprocess 2045277 00:06:55.075 01:41:40 -- common/autotest_common.sh@926 -- # '[' -z 2045277 ']' 00:06:55.075 01:41:40 -- common/autotest_common.sh@930 -- # kill -0 2045277 00:06:55.075 01:41:40 -- common/autotest_common.sh@931 -- # uname 
00:06:55.075 01:41:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:55.075 01:41:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2045277 00:06:55.075 01:41:40 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:55.075 01:41:40 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:55.075 01:41:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2045277' 00:06:55.075 killing process with pid 2045277 00:06:55.075 01:41:40 -- common/autotest_common.sh@945 -- # kill 2045277 00:06:55.075 01:41:40 -- common/autotest_common.sh@950 -- # wait 2045277 00:06:55.333 01:41:40 -- event/cpu_locks.sh@18 -- # rm -f 00:06:55.333 01:41:40 -- event/cpu_locks.sh@1 -- # cleanup 00:06:55.333 01:41:40 -- event/cpu_locks.sh@15 -- # [[ -z 2045139 ]] 00:06:55.333 01:41:40 -- event/cpu_locks.sh@15 -- # killprocess 2045139 00:06:55.333 01:41:40 -- common/autotest_common.sh@926 -- # '[' -z 2045139 ']' 00:06:55.333 01:41:40 -- common/autotest_common.sh@930 -- # kill -0 2045139 00:06:55.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2045139) - No such process 00:06:55.333 01:41:40 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2045139 is not found' 00:06:55.333 Process with pid 2045139 is not found 00:06:55.333 01:41:40 -- event/cpu_locks.sh@16 -- # [[ -z 2045277 ]] 00:06:55.333 01:41:40 -- event/cpu_locks.sh@16 -- # killprocess 2045277 00:06:55.333 01:41:40 -- common/autotest_common.sh@926 -- # '[' -z 2045277 ']' 00:06:55.333 01:41:40 -- common/autotest_common.sh@930 -- # kill -0 2045277 00:06:55.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2045277) - No such process 00:06:55.333 01:41:40 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2045277 is not found' 00:06:55.333 Process with pid 2045277 is not found 00:06:55.333 01:41:40 -- event/cpu_locks.sh@18 -- # rm -f 00:06:55.333 00:06:55.333 real 0m19.416s 00:06:55.333 user 0m34.603s 00:06:55.333 sys 0m5.539s 00:06:55.333 01:41:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.333 01:41:40 -- common/autotest_common.sh@10 -- # set +x 00:06:55.333 ************************************ 00:06:55.333 END TEST cpu_locks 00:06:55.333 ************************************ 00:06:55.333 00:06:55.334 real 0m44.739s 00:06:55.334 user 1m25.135s 00:06:55.334 sys 0m9.463s 00:06:55.334 01:41:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.334 01:41:40 -- common/autotest_common.sh@10 -- # set +x 00:06:55.334 ************************************ 00:06:55.334 END TEST event 00:06:55.334 ************************************ 00:06:55.334 01:41:40 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:55.334 01:41:40 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:55.334 01:41:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.334 01:41:40 -- common/autotest_common.sh@10 -- # set +x 00:06:55.334 ************************************ 00:06:55.334 START TEST thread 00:06:55.334 ************************************ 00:06:55.334 01:41:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:55.593 * Looking for test storage... 
00:06:55.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:55.593 01:41:40 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.593 01:41:40 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:55.593 01:41:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.593 01:41:40 -- common/autotest_common.sh@10 -- # set +x 00:06:55.593 ************************************ 00:06:55.593 START TEST thread_poller_perf 00:06:55.593 ************************************ 00:06:55.593 01:41:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:55.593 [2024-04-15 01:41:41.002393] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:55.593 [2024-04-15 01:41:41.002473] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2045655 ] 00:06:55.593 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.593 [2024-04-15 01:41:41.066900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.593 [2024-04-15 01:41:41.160825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.593 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:56.967 ====================================== 00:06:56.967 busy:2709690767 (cyc) 00:06:56.967 total_run_count: 280000 00:06:56.967 tsc_hz: 2700000000 (cyc) 00:06:56.967 ====================================== 00:06:56.967 poller_cost: 9677 (cyc), 3584 (nsec) 00:06:56.967 00:06:56.967 real 0m1.261s 00:06:56.967 user 0m1.170s 00:06:56.967 sys 0m0.085s 00:06:56.967 01:41:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.967 01:41:42 -- common/autotest_common.sh@10 -- # set +x 00:06:56.967 ************************************ 00:06:56.967 END TEST thread_poller_perf 00:06:56.967 ************************************ 00:06:56.967 01:41:42 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.967 01:41:42 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:56.967 01:41:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.967 01:41:42 -- common/autotest_common.sh@10 -- # set +x 00:06:56.967 ************************************ 00:06:56.967 START TEST thread_poller_perf 00:06:56.967 ************************************ 00:06:56.967 01:41:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:56.967 [2024-04-15 01:41:42.288302] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:06:56.967 [2024-04-15 01:41:42.288386] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2045810 ] 00:06:56.967 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.967 [2024-04-15 01:41:42.353596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.967 [2024-04-15 01:41:42.443725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.967 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:57.902 ====================================== 00:06:57.902 busy:2703617167 (cyc) 00:06:57.902 total_run_count: 3828000 00:06:57.902 tsc_hz: 2700000000 (cyc) 00:06:57.902 ====================================== 00:06:57.902 poller_cost: 706 (cyc), 261 (nsec) 00:06:57.902 00:06:57.902 real 0m1.252s 00:06:57.902 user 0m1.163s 00:06:57.902 sys 0m0.083s 00:06:57.902 01:41:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.902 01:41:43 -- common/autotest_common.sh@10 -- # set +x 00:06:57.902 ************************************ 00:06:57.902 END TEST thread_poller_perf 00:06:57.902 ************************************ 00:06:58.161 01:41:43 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:58.161 00:06:58.161 real 0m2.612s 00:06:58.161 user 0m2.378s 00:06:58.161 sys 0m0.235s 00:06:58.161 01:41:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.161 01:41:43 -- common/autotest_common.sh@10 -- # set +x 00:06:58.161 ************************************ 00:06:58.161 END TEST thread 00:06:58.161 ************************************ 00:06:58.161 01:41:43 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:58.161 01:41:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:58.161 01:41:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:58.161 01:41:43 -- common/autotest_common.sh@10 -- # set +x 00:06:58.161 ************************************ 00:06:58.161 START TEST accel 00:06:58.161 ************************************ 00:06:58.161 01:41:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:58.161 * Looking for test storage... 00:06:58.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:58.161 01:41:43 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:58.161 01:41:43 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:58.161 01:41:43 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:58.161 01:41:43 -- accel/accel.sh@59 -- # spdk_tgt_pid=2046004 00:06:58.161 01:41:43 -- accel/accel.sh@60 -- # waitforlisten 2046004 00:06:58.161 01:41:43 -- common/autotest_common.sh@819 -- # '[' -z 2046004 ']' 00:06:58.161 01:41:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.161 01:41:43 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:58.161 01:41:43 -- accel/accel.sh@58 -- # build_accel_config 00:06:58.161 01:41:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:58.161 01:41:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
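The two poller_perf summaries above reduce to simple arithmetic: poller_cost is busy cycles divided by total_run_count, converted to nanoseconds via tsc_hz (2.7 cycles per ns here). The period-1-microsecond run is far costlier per round, consistent with timed pollers paying extra timer bookkeeping:

    echo $(( 2709690767 / 280000 ))    # 9677 cyc per round; 9677 / 2.7 ≈ 3584 ns (the -l 1 run)
    echo $(( 2703617167 / 3828000 ))   # 706 cyc per round;   706 / 2.7 ≈  261 ns (the -l 0 run)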
00:06:58.161 01:41:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.161 01:41:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:58.161 01:41:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.161 01:41:43 -- common/autotest_common.sh@10 -- # set +x 00:06:58.161 01:41:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.161 01:41:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.161 01:41:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.161 01:41:43 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.161 01:41:43 -- accel/accel.sh@42 -- # jq -r . 00:06:58.161 [2024-04-15 01:41:43.673826] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:58.161 [2024-04-15 01:41:43.673897] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046004 ] 00:06:58.161 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.161 [2024-04-15 01:41:43.734851] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.420 [2024-04-15 01:41:43.823131] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:58.420 [2024-04-15 01:41:43.823304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.986 01:41:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:58.986 01:41:44 -- common/autotest_common.sh@852 -- # return 0 00:06:58.986 01:41:44 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:58.986 01:41:44 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:58.986 01:41:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:58.986 01:41:44 -- common/autotest_common.sh@10 -- # set +x 00:06:58.986 01:41:44 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:58.986 01:41:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:59.244 01:41:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # IFS== 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:59.245 01:41:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:59.245 01:41:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # IFS== 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:59.245 01:41:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:59.245 01:41:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # IFS== 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:59.245 01:41:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:59.245 01:41:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # IFS== 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:59.245 01:41:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:59.245 01:41:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # IFS== 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:59.245 01:41:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:59.245 01:41:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # IFS== 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:59.245 01:41:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:59.245 01:41:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # IFS== 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:59.245 01:41:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:59.245 01:41:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # IFS== 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:59.245 01:41:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:59.245 01:41:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # IFS== 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:59.245 01:41:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:59.245 01:41:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # IFS== 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:59.245 01:41:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:59.245 01:41:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # IFS== 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:59.245 01:41:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:59.245 01:41:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # IFS== 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:59.245 01:41:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:59.245 01:41:44 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # IFS== 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:59.245 01:41:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:59.245 01:41:44 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # IFS== 00:06:59.245 01:41:44 -- accel/accel.sh@64 -- # read -r opc module 00:06:59.245 01:41:44 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:59.245 01:41:44 -- accel/accel.sh@67 -- # killprocess 2046004 00:06:59.245 01:41:44 -- common/autotest_common.sh@926 -- # '[' -z 2046004 ']' 00:06:59.245 01:41:44 -- common/autotest_common.sh@930 -- # kill -0 2046004 00:06:59.245 01:41:44 -- common/autotest_common.sh@931 -- # uname 00:06:59.245 01:41:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:59.245 01:41:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2046004 00:06:59.245 01:41:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:59.245 01:41:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:59.245 01:41:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2046004' 00:06:59.245 killing process with pid 2046004 00:06:59.245 01:41:44 -- common/autotest_common.sh@945 -- # kill 2046004 00:06:59.245 01:41:44 -- common/autotest_common.sh@950 -- # wait 2046004 00:06:59.504 01:41:45 -- accel/accel.sh@68 -- # trap - ERR 00:06:59.504 01:41:45 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:59.504 01:41:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:59.504 01:41:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.504 01:41:45 -- common/autotest_common.sh@10 -- # set +x 00:06:59.504 01:41:45 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:59.504 01:41:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:59.504 01:41:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.504 01:41:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.504 01:41:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.504 01:41:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.504 01:41:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.504 01:41:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.504 01:41:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.504 01:41:45 -- accel/accel.sh@42 -- # jq -r . 
00:06:59.504 01:41:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.504 01:41:45 -- common/autotest_common.sh@10 -- # set +x 00:06:59.504 01:41:45 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:59.504 01:41:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:59.504 01:41:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.504 01:41:45 -- common/autotest_common.sh@10 -- # set +x 00:06:59.504 ************************************ 00:06:59.504 START TEST accel_missing_filename 00:06:59.504 ************************************ 00:06:59.504 01:41:45 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:59.504 01:41:45 -- common/autotest_common.sh@640 -- # local es=0 00:06:59.504 01:41:45 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:59.504 01:41:45 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:59.504 01:41:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.504 01:41:45 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:59.504 01:41:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:59.504 01:41:45 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:59.504 01:41:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:59.504 01:41:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.504 01:41:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.504 01:41:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.504 01:41:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.504 01:41:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.504 01:41:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.504 01:41:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.504 01:41:45 -- accel/accel.sh@42 -- # jq -r . 00:06:59.762 [2024-04-15 01:41:45.159343] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:06:59.762 [2024-04-15 01:41:45.159423] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046299 ] 00:06:59.762 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.762 [2024-04-15 01:41:45.221393] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.762 [2024-04-15 01:41:45.310757] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.762 [2024-04-15 01:41:45.370384] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.020 [2024-04-15 01:41:45.445992] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:00.020 A filename is required. 
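The es juggling traced just after this is the harness's NOT idiom: run a command that is expected to fail and invert its exit status. A minimal reconstruction consistent with the values in this log (the real helper's case patterns are not visible here, so the collapse step is an assumption):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es > 128 )) && es=$(( es - 128 ))   # matches 234 -> 106 and 161 -> 33 below
        (( es != 0 )) && es=1                  # collapse any failure to 1 (assumed)
        (( !es == 0 ))                         # succeed only if "$@" failed
    }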
00:07:00.021 01:41:45 -- common/autotest_common.sh@643 -- # es=234 00:07:00.021 01:41:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:00.021 01:41:45 -- common/autotest_common.sh@652 -- # es=106 00:07:00.021 01:41:45 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:00.021 01:41:45 -- common/autotest_common.sh@660 -- # es=1 00:07:00.021 01:41:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:00.021 00:07:00.021 real 0m0.383s 00:07:00.021 user 0m0.272s 00:07:00.021 sys 0m0.144s 00:07:00.021 01:41:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.021 01:41:45 -- common/autotest_common.sh@10 -- # set +x 00:07:00.021 ************************************ 00:07:00.021 END TEST accel_missing_filename 00:07:00.021 ************************************ 00:07:00.021 01:41:45 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:00.021 01:41:45 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:00.021 01:41:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.021 01:41:45 -- common/autotest_common.sh@10 -- # set +x 00:07:00.021 ************************************ 00:07:00.021 START TEST accel_compress_verify 00:07:00.021 ************************************ 00:07:00.021 01:41:45 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:00.021 01:41:45 -- common/autotest_common.sh@640 -- # local es=0 00:07:00.021 01:41:45 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:00.021 01:41:45 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:00.021 01:41:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:00.021 01:41:45 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:00.021 01:41:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:00.021 01:41:45 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:00.021 01:41:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:00.021 01:41:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.021 01:41:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.021 01:41:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.021 01:41:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.021 01:41:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.021 01:41:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.021 01:41:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.021 01:41:45 -- accel/accel.sh@42 -- # jq -r . 00:07:00.021 [2024-04-15 01:41:45.564364] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:00.021 [2024-04-15 01:41:45.564445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046328 ] 00:07:00.021 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.021 [2024-04-15 01:41:45.625533] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.279 [2024-04-15 01:41:45.717828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.279 [2024-04-15 01:41:45.779394] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:00.279 [2024-04-15 01:41:45.867935] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:00.538 00:07:00.538 Compression does not support the verify option, aborting. 00:07:00.538 01:41:45 -- common/autotest_common.sh@643 -- # es=161 00:07:00.538 01:41:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:00.538 01:41:45 -- common/autotest_common.sh@652 -- # es=33 00:07:00.538 01:41:45 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:00.538 01:41:45 -- common/autotest_common.sh@660 -- # es=1 00:07:00.538 01:41:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:00.538 00:07:00.538 real 0m0.401s 00:07:00.538 user 0m0.287s 00:07:00.538 sys 0m0.143s 00:07:00.538 01:41:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.538 01:41:45 -- common/autotest_common.sh@10 -- # set +x 00:07:00.538 ************************************ 00:07:00.538 END TEST accel_compress_verify 00:07:00.538 ************************************ 00:07:00.538 01:41:45 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:00.538 01:41:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:00.538 01:41:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.538 01:41:45 -- common/autotest_common.sh@10 -- # set +x 00:07:00.538 ************************************ 00:07:00.538 START TEST accel_wrong_workload 00:07:00.538 ************************************ 00:07:00.538 01:41:45 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:07:00.538 01:41:45 -- common/autotest_common.sh@640 -- # local es=0 00:07:00.538 01:41:45 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:00.538 01:41:45 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:00.538 01:41:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:00.538 01:41:45 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:00.538 01:41:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:00.538 01:41:45 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:07:00.538 01:41:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:00.538 01:41:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.538 01:41:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.538 01:41:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.538 01:41:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.538 01:41:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.538 01:41:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.538 01:41:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.538 01:41:45 -- accel/accel.sh@42 -- # jq -r . 
00:07:00.538 Unsupported workload type: foobar 00:07:00.538 [2024-04-15 01:41:45.986462] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:00.538 accel_perf options: 00:07:00.538 [-h help message] 00:07:00.538 [-q queue depth per core] 00:07:00.538 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:00.538 [-T number of threads per core 00:07:00.538 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:00.538 [-t time in seconds] 00:07:00.538 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:00.538 [ dif_verify, , dif_generate, dif_generate_copy 00:07:00.538 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:00.538 [-l for compress/decompress workloads, name of uncompressed input file 00:07:00.538 [-S for crc32c workload, use this seed value (default 0) 00:07:00.538 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:00.538 [-f for fill workload, use this BYTE value (default 255) 00:07:00.538 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:00.538 [-y verify result if this switch is on] 00:07:00.538 [-a tasks to allocate per core (default: same value as -q)] 00:07:00.538 Can be used to spread operations across a wider range of memory. 00:07:00.538 01:41:45 -- common/autotest_common.sh@643 -- # es=1 00:07:00.538 01:41:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:00.538 01:41:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:00.538 01:41:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:00.538 00:07:00.538 real 0m0.019s 00:07:00.538 user 0m0.012s 00:07:00.538 sys 0m0.007s 00:07:00.538 01:41:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.538 01:41:45 -- common/autotest_common.sh@10 -- # set +x 00:07:00.538 ************************************ 00:07:00.538 END TEST accel_wrong_workload 00:07:00.538 ************************************ 00:07:00.538 Error: writing output failed: Broken pipe 00:07:00.538 01:41:46 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:00.538 01:41:46 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:07:00.538 01:41:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.538 01:41:46 -- common/autotest_common.sh@10 -- # set +x 00:07:00.538 ************************************ 00:07:00.538 START TEST accel_negative_buffers 00:07:00.538 ************************************ 00:07:00.538 01:41:46 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:00.538 01:41:46 -- common/autotest_common.sh@640 -- # local es=0 00:07:00.538 01:41:46 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:00.538 01:41:46 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:07:00.538 01:41:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:00.538 01:41:46 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:07:00.538 01:41:46 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:00.538 01:41:46 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:07:00.538 01:41:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w 
xor -y -x -1 00:07:00.538 01:41:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.538 01:41:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.538 01:41:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.538 01:41:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.538 01:41:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.538 01:41:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.538 01:41:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.538 01:41:46 -- accel/accel.sh@42 -- # jq -r . 00:07:00.538 -x option must be non-negative. 00:07:00.538 [2024-04-15 01:41:46.035019] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:00.538 accel_perf options: 00:07:00.538 [-h help message] 00:07:00.538 [-q queue depth per core] 00:07:00.538 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:00.538 [-T number of threads per core 00:07:00.538 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:00.538 [-t time in seconds] 00:07:00.538 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:00.538 [ dif_verify, , dif_generate, dif_generate_copy 00:07:00.538 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:00.538 [-l for compress/decompress workloads, name of uncompressed input file 00:07:00.538 [-S for crc32c workload, use this seed value (default 0) 00:07:00.538 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:00.538 [-f for fill workload, use this BYTE value (default 255) 00:07:00.538 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:00.538 [-y verify result if this switch is on] 00:07:00.538 [-a tasks to allocate per core (default: same value as -q)] 00:07:00.538 Can be used to spread operations across a wider range of memory. 
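The usage dump above is printed twice because both negative tests trip argument validation on purpose: accel_wrong_workload passes -w foobar, which is not in the supported workload list, and accel_negative_buffers passes -x -1, which the -x option rejects as negative. For contrast, a minimal sketch of an invocation this validation would accept, built only from flags in the usage text and the binary path used throughout this job (queue depth, transfer size, and buffer count are illustrative choices, not values taken from this run):

  # valid xor run: supported workload (-w), non-negative source buffer count (-x, minimum 2)
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
      -q 32 -o 4096 -t 1 -w xor -x 2 -y
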
00:07:00.538 01:41:46 -- common/autotest_common.sh@643 -- # es=1 00:07:00.538 01:41:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:00.538 01:41:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:00.538 01:41:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:00.538 00:07:00.538 real 0m0.023s 00:07:00.538 user 0m0.017s 00:07:00.538 sys 0m0.006s 00:07:00.538 01:41:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:00.539 01:41:46 -- common/autotest_common.sh@10 -- # set +x 00:07:00.539 ************************************ 00:07:00.539 END TEST accel_negative_buffers 00:07:00.539 ************************************ 00:07:00.539 Error: writing output failed: Broken pipe 00:07:00.539 01:41:46 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:00.539 01:41:46 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:00.539 01:41:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:00.539 01:41:46 -- common/autotest_common.sh@10 -- # set +x 00:07:00.539 ************************************ 00:07:00.539 START TEST accel_crc32c 00:07:00.539 ************************************ 00:07:00.539 01:41:46 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:00.539 01:41:46 -- accel/accel.sh@16 -- # local accel_opc 00:07:00.539 01:41:46 -- accel/accel.sh@17 -- # local accel_module 00:07:00.539 01:41:46 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:00.539 01:41:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:00.539 01:41:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.539 01:41:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.539 01:41:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.539 01:41:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.539 01:41:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.539 01:41:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.539 01:41:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.539 01:41:46 -- accel/accel.sh@42 -- # jq -r . 00:07:00.539 [2024-04-15 01:41:46.076469] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:00.539 [2024-04-15 01:41:46.076534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046389 ] 00:07:00.539 EAL: No free 2048 kB hugepages reported on node 1 00:07:00.539 [2024-04-15 01:41:46.140872] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.797 [2024-04-15 01:41:46.233746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.171 01:41:47 -- accel/accel.sh@18 -- # out=' 00:07:02.171 SPDK Configuration: 00:07:02.171 Core mask: 0x1 00:07:02.171 00:07:02.171 Accel Perf Configuration: 00:07:02.171 Workload Type: crc32c 00:07:02.171 CRC-32C seed: 32 00:07:02.171 Transfer size: 4096 bytes 00:07:02.171 Vector count 1 00:07:02.171 Module: software 00:07:02.171 Queue depth: 32 00:07:02.171 Allocate depth: 32 00:07:02.171 # threads/core: 1 00:07:02.171 Run time: 1 seconds 00:07:02.171 Verify: Yes 00:07:02.171 00:07:02.171 Running for 1 seconds... 
00:07:02.171 00:07:02.171 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:02.171 ------------------------------------------------------------------------------------ 00:07:02.171 0,0 411104/s 1605 MiB/s 0 0 00:07:02.171 ==================================================================================== 00:07:02.171 Total 411104/s 1605 MiB/s 0 0' 00:07:02.171 01:41:47 -- accel/accel.sh@20 -- # IFS=: 00:07:02.171 01:41:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:02.171 01:41:47 -- accel/accel.sh@20 -- # read -r var val 00:07:02.171 01:41:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:02.171 01:41:47 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.171 01:41:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.171 01:41:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.171 01:41:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.171 01:41:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.171 01:41:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.171 01:41:47 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.171 01:41:47 -- accel/accel.sh@42 -- # jq -r . 00:07:02.171 [2024-04-15 01:41:47.477174] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:02.172 [2024-04-15 01:41:47.477246] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046646 ] 00:07:02.172 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.172 [2024-04-15 01:41:47.538516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.172 [2024-04-15 01:41:47.627509] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.172 01:41:47 -- accel/accel.sh@21 -- # val= 00:07:02.172 01:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # IFS=: 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # read -r var val 00:07:02.172 01:41:47 -- accel/accel.sh@21 -- # val= 00:07:02.172 01:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # IFS=: 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # read -r var val 00:07:02.172 01:41:47 -- accel/accel.sh@21 -- # val=0x1 00:07:02.172 01:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # IFS=: 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # read -r var val 00:07:02.172 01:41:47 -- accel/accel.sh@21 -- # val= 00:07:02.172 01:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # IFS=: 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # read -r var val 00:07:02.172 01:41:47 -- accel/accel.sh@21 -- # val= 00:07:02.172 01:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # IFS=: 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # read -r var val 00:07:02.172 01:41:47 -- accel/accel.sh@21 -- # val=crc32c 00:07:02.172 01:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 01:41:47 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # IFS=: 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # read -r var val 00:07:02.172 01:41:47 -- accel/accel.sh@21 -- # val=32 00:07:02.172 01:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # IFS=: 00:07:02.172 
01:41:47 -- accel/accel.sh@20 -- # read -r var val 00:07:02.172 01:41:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:02.172 01:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # IFS=: 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # read -r var val 00:07:02.172 01:41:47 -- accel/accel.sh@21 -- # val= 00:07:02.172 01:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # IFS=: 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # read -r var val 00:07:02.172 01:41:47 -- accel/accel.sh@21 -- # val=software 00:07:02.172 01:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 01:41:47 -- accel/accel.sh@23 -- # accel_module=software 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # IFS=: 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # read -r var val 00:07:02.172 01:41:47 -- accel/accel.sh@21 -- # val=32 00:07:02.172 01:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # IFS=: 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # read -r var val 00:07:02.172 01:41:47 -- accel/accel.sh@21 -- # val=32 00:07:02.172 01:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # IFS=: 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # read -r var val 00:07:02.172 01:41:47 -- accel/accel.sh@21 -- # val=1 00:07:02.172 01:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # IFS=: 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # read -r var val 00:07:02.172 01:41:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:02.172 01:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # IFS=: 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # read -r var val 00:07:02.172 01:41:47 -- accel/accel.sh@21 -- # val=Yes 00:07:02.172 01:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # IFS=: 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # read -r var val 00:07:02.172 01:41:47 -- accel/accel.sh@21 -- # val= 00:07:02.172 01:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # IFS=: 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # read -r var val 00:07:02.172 01:41:47 -- accel/accel.sh@21 -- # val= 00:07:02.172 01:41:47 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # IFS=: 00:07:02.172 01:41:47 -- accel/accel.sh@20 -- # read -r var val 00:07:03.546 01:41:48 -- accel/accel.sh@21 -- # val= 00:07:03.546 01:41:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.546 01:41:48 -- accel/accel.sh@20 -- # IFS=: 00:07:03.546 01:41:48 -- accel/accel.sh@20 -- # read -r var val 00:07:03.546 01:41:48 -- accel/accel.sh@21 -- # val= 00:07:03.546 01:41:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.546 01:41:48 -- accel/accel.sh@20 -- # IFS=: 00:07:03.546 01:41:48 -- accel/accel.sh@20 -- # read -r var val 00:07:03.546 01:41:48 -- accel/accel.sh@21 -- # val= 00:07:03.546 01:41:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.546 01:41:48 -- accel/accel.sh@20 -- # IFS=: 00:07:03.546 01:41:48 -- accel/accel.sh@20 -- # read -r var val 00:07:03.546 01:41:48 -- accel/accel.sh@21 -- # val= 00:07:03.546 01:41:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.546 01:41:48 -- accel/accel.sh@20 -- # IFS=: 00:07:03.546 01:41:48 -- accel/accel.sh@20 -- # read -r var val 00:07:03.547 01:41:48 -- accel/accel.sh@21 -- # val= 00:07:03.547 01:41:48 -- accel/accel.sh@22 -- # case "$var" in 
00:07:03.547 01:41:48 -- accel/accel.sh@20 -- # IFS=: 00:07:03.547 01:41:48 -- accel/accel.sh@20 -- # read -r var val 00:07:03.547 01:41:48 -- accel/accel.sh@21 -- # val= 00:07:03.547 01:41:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.547 01:41:48 -- accel/accel.sh@20 -- # IFS=: 00:07:03.547 01:41:48 -- accel/accel.sh@20 -- # read -r var val 00:07:03.547 01:41:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:03.547 01:41:48 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:03.547 01:41:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:03.547 00:07:03.547 real 0m2.791s 00:07:03.547 user 0m2.504s 00:07:03.547 sys 0m0.281s 00:07:03.547 01:41:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:03.547 01:41:48 -- common/autotest_common.sh@10 -- # set +x 00:07:03.547 ************************************ 00:07:03.547 END TEST accel_crc32c 00:07:03.547 ************************************ 00:07:03.547 01:41:48 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:03.547 01:41:48 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:03.547 01:41:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:03.547 01:41:48 -- common/autotest_common.sh@10 -- # set +x 00:07:03.547 ************************************ 00:07:03.547 START TEST accel_crc32c_C2 00:07:03.547 ************************************ 00:07:03.547 01:41:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:03.547 01:41:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:03.547 01:41:48 -- accel/accel.sh@17 -- # local accel_module 00:07:03.547 01:41:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:03.547 01:41:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:03.547 01:41:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.547 01:41:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.547 01:41:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.547 01:41:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.547 01:41:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.547 01:41:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.547 01:41:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.547 01:41:48 -- accel/accel.sh@42 -- # jq -r . 00:07:03.547 [2024-04-15 01:41:48.895192] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:03.547 [2024-04-15 01:41:48.895282] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046810 ] 00:07:03.547 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.547 [2024-04-15 01:41:48.956221] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.547 [2024-04-15 01:41:49.046495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.920 01:41:50 -- accel/accel.sh@18 -- # out=' 00:07:04.920 SPDK Configuration: 00:07:04.920 Core mask: 0x1 00:07:04.920 00:07:04.920 Accel Perf Configuration: 00:07:04.920 Workload Type: crc32c 00:07:04.920 CRC-32C seed: 0 00:07:04.920 Transfer size: 4096 bytes 00:07:04.920 Vector count 2 00:07:04.920 Module: software 00:07:04.920 Queue depth: 32 00:07:04.920 Allocate depth: 32 00:07:04.920 # threads/core: 1 00:07:04.920 Run time: 1 seconds 00:07:04.920 Verify: Yes 00:07:04.920 00:07:04.920 Running for 1 seconds... 00:07:04.920 00:07:04.920 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:04.920 ------------------------------------------------------------------------------------ 00:07:04.920 0,0 322112/s 2516 MiB/s 0 0 00:07:04.920 ==================================================================================== 00:07:04.921 Total 322112/s 1258 MiB/s 0 0' 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # IFS=: 00:07:04.921 01:41:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # read -r var val 00:07:04.921 01:41:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:04.921 01:41:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.921 01:41:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.921 01:41:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.921 01:41:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.921 01:41:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.921 01:41:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.921 01:41:50 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.921 01:41:50 -- accel/accel.sh@42 -- # jq -r . 00:07:04.921 [2024-04-15 01:41:50.292459] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:04.921 [2024-04-15 01:41:50.292529] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2046955 ] 00:07:04.921 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.921 [2024-04-15 01:41:50.355501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.921 [2024-04-15 01:41:50.447342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.921 01:41:50 -- accel/accel.sh@21 -- # val= 00:07:04.921 01:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # IFS=: 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # read -r var val 00:07:04.921 01:41:50 -- accel/accel.sh@21 -- # val= 00:07:04.921 01:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # IFS=: 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # read -r var val 00:07:04.921 01:41:50 -- accel/accel.sh@21 -- # val=0x1 00:07:04.921 01:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # IFS=: 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # read -r var val 00:07:04.921 01:41:50 -- accel/accel.sh@21 -- # val= 00:07:04.921 01:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # IFS=: 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # read -r var val 00:07:04.921 01:41:50 -- accel/accel.sh@21 -- # val= 00:07:04.921 01:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # IFS=: 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # read -r var val 00:07:04.921 01:41:50 -- accel/accel.sh@21 -- # val=crc32c 00:07:04.921 01:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.921 01:41:50 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # IFS=: 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # read -r var val 00:07:04.921 01:41:50 -- accel/accel.sh@21 -- # val=0 00:07:04.921 01:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # IFS=: 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # read -r var val 00:07:04.921 01:41:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.921 01:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # IFS=: 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # read -r var val 00:07:04.921 01:41:50 -- accel/accel.sh@21 -- # val= 00:07:04.921 01:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # IFS=: 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # read -r var val 00:07:04.921 01:41:50 -- accel/accel.sh@21 -- # val=software 00:07:04.921 01:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.921 01:41:50 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # IFS=: 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # read -r var val 00:07:04.921 01:41:50 -- accel/accel.sh@21 -- # val=32 00:07:04.921 01:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # IFS=: 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # read -r var val 00:07:04.921 01:41:50 -- accel/accel.sh@21 -- # val=32 00:07:04.921 01:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # IFS=: 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # read -r var val 00:07:04.921 01:41:50 -- 
accel/accel.sh@21 -- # val=1 00:07:04.921 01:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # IFS=: 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # read -r var val 00:07:04.921 01:41:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.921 01:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # IFS=: 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # read -r var val 00:07:04.921 01:41:50 -- accel/accel.sh@21 -- # val=Yes 00:07:04.921 01:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # IFS=: 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # read -r var val 00:07:04.921 01:41:50 -- accel/accel.sh@21 -- # val= 00:07:04.921 01:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # IFS=: 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # read -r var val 00:07:04.921 01:41:50 -- accel/accel.sh@21 -- # val= 00:07:04.921 01:41:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # IFS=: 00:07:04.921 01:41:50 -- accel/accel.sh@20 -- # read -r var val 00:07:06.295 01:41:51 -- accel/accel.sh@21 -- # val= 00:07:06.295 01:41:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.295 01:41:51 -- accel/accel.sh@20 -- # IFS=: 00:07:06.295 01:41:51 -- accel/accel.sh@20 -- # read -r var val 00:07:06.295 01:41:51 -- accel/accel.sh@21 -- # val= 00:07:06.295 01:41:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.295 01:41:51 -- accel/accel.sh@20 -- # IFS=: 00:07:06.295 01:41:51 -- accel/accel.sh@20 -- # read -r var val 00:07:06.295 01:41:51 -- accel/accel.sh@21 -- # val= 00:07:06.295 01:41:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.295 01:41:51 -- accel/accel.sh@20 -- # IFS=: 00:07:06.295 01:41:51 -- accel/accel.sh@20 -- # read -r var val 00:07:06.295 01:41:51 -- accel/accel.sh@21 -- # val= 00:07:06.295 01:41:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.295 01:41:51 -- accel/accel.sh@20 -- # IFS=: 00:07:06.295 01:41:51 -- accel/accel.sh@20 -- # read -r var val 00:07:06.295 01:41:51 -- accel/accel.sh@21 -- # val= 00:07:06.295 01:41:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.295 01:41:51 -- accel/accel.sh@20 -- # IFS=: 00:07:06.295 01:41:51 -- accel/accel.sh@20 -- # read -r var val 00:07:06.295 01:41:51 -- accel/accel.sh@21 -- # val= 00:07:06.295 01:41:51 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.295 01:41:51 -- accel/accel.sh@20 -- # IFS=: 00:07:06.295 01:41:51 -- accel/accel.sh@20 -- # read -r var val 00:07:06.295 01:41:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:06.295 01:41:51 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:06.295 01:41:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.295 00:07:06.295 real 0m2.808s 00:07:06.295 user 0m2.518s 00:07:06.295 sys 0m0.284s 00:07:06.295 01:41:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.295 01:41:51 -- common/autotest_common.sh@10 -- # set +x 00:07:06.295 ************************************ 00:07:06.295 END TEST accel_crc32c_C2 00:07:06.295 ************************************ 00:07:06.295 01:41:51 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:06.295 01:41:51 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:06.295 01:41:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.295 01:41:51 -- common/autotest_common.sh@10 -- # set +x 00:07:06.295 ************************************ 00:07:06.295 START TEST accel_copy 
00:07:06.295 ************************************ 00:07:06.295 01:41:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:07:06.295 01:41:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.295 01:41:51 -- accel/accel.sh@17 -- # local accel_module 00:07:06.295 01:41:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:06.295 01:41:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:06.295 01:41:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.295 01:41:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.295 01:41:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.295 01:41:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.295 01:41:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.295 01:41:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.295 01:41:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.295 01:41:51 -- accel/accel.sh@42 -- # jq -r . 00:07:06.295 [2024-04-15 01:41:51.724917] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:06.295 [2024-04-15 01:41:51.724990] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047110 ] 00:07:06.295 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.295 [2024-04-15 01:41:51.786069] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.295 [2024-04-15 01:41:51.875914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.667 01:41:53 -- accel/accel.sh@18 -- # out=' 00:07:07.667 SPDK Configuration: 00:07:07.667 Core mask: 0x1 00:07:07.667 00:07:07.667 Accel Perf Configuration: 00:07:07.668 Workload Type: copy 00:07:07.668 Transfer size: 4096 bytes 00:07:07.668 Vector count 1 00:07:07.668 Module: software 00:07:07.668 Queue depth: 32 00:07:07.668 Allocate depth: 32 00:07:07.668 # threads/core: 1 00:07:07.668 Run time: 1 seconds 00:07:07.668 Verify: Yes 00:07:07.668 00:07:07.668 Running for 1 seconds... 00:07:07.668 00:07:07.668 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:07.668 ------------------------------------------------------------------------------------ 00:07:07.668 0,0 281280/s 1098 MiB/s 0 0 00:07:07.668 ==================================================================================== 00:07:07.668 Total 281280/s 1098 MiB/s 0 0' 00:07:07.668 01:41:53 -- accel/accel.sh@20 -- # IFS=: 00:07:07.668 01:41:53 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:07.668 01:41:53 -- accel/accel.sh@20 -- # read -r var val 00:07:07.668 01:41:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:07.668 01:41:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.668 01:41:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.668 01:41:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.668 01:41:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.668 01:41:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.668 01:41:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.668 01:41:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.668 01:41:53 -- accel/accel.sh@42 -- # jq -r . 00:07:07.668 [2024-04-15 01:41:53.114852] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:07.668 [2024-04-15 01:41:53.114929] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047374 ] 00:07:07.668 EAL: No free 2048 kB hugepages reported on node 1 00:07:07.668 [2024-04-15 01:41:53.178888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.668 [2024-04-15 01:41:53.268630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.926 01:41:53 -- accel/accel.sh@21 -- # val= 00:07:07.926 01:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # IFS=: 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # read -r var val 00:07:07.926 01:41:53 -- accel/accel.sh@21 -- # val= 00:07:07.926 01:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # IFS=: 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # read -r var val 00:07:07.926 01:41:53 -- accel/accel.sh@21 -- # val=0x1 00:07:07.926 01:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # IFS=: 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # read -r var val 00:07:07.926 01:41:53 -- accel/accel.sh@21 -- # val= 00:07:07.926 01:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # IFS=: 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # read -r var val 00:07:07.926 01:41:53 -- accel/accel.sh@21 -- # val= 00:07:07.926 01:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # IFS=: 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # read -r var val 00:07:07.926 01:41:53 -- accel/accel.sh@21 -- # val=copy 00:07:07.926 01:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.926 01:41:53 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # IFS=: 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # read -r var val 00:07:07.926 01:41:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.926 01:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # IFS=: 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # read -r var val 00:07:07.926 01:41:53 -- accel/accel.sh@21 -- # val= 00:07:07.926 01:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # IFS=: 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # read -r var val 00:07:07.926 01:41:53 -- accel/accel.sh@21 -- # val=software 00:07:07.926 01:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.926 01:41:53 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # IFS=: 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # read -r var val 00:07:07.926 01:41:53 -- accel/accel.sh@21 -- # val=32 00:07:07.926 01:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # IFS=: 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # read -r var val 00:07:07.926 01:41:53 -- accel/accel.sh@21 -- # val=32 00:07:07.926 01:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # IFS=: 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # read -r var val 00:07:07.926 01:41:53 -- accel/accel.sh@21 -- # val=1 00:07:07.926 01:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # IFS=: 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # read -r var val 00:07:07.926 01:41:53 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:07.926 01:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # IFS=: 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # read -r var val 00:07:07.926 01:41:53 -- accel/accel.sh@21 -- # val=Yes 00:07:07.926 01:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # IFS=: 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # read -r var val 00:07:07.926 01:41:53 -- accel/accel.sh@21 -- # val= 00:07:07.926 01:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # IFS=: 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # read -r var val 00:07:07.926 01:41:53 -- accel/accel.sh@21 -- # val= 00:07:07.926 01:41:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # IFS=: 00:07:07.926 01:41:53 -- accel/accel.sh@20 -- # read -r var val 00:07:08.859 01:41:54 -- accel/accel.sh@21 -- # val= 00:07:08.859 01:41:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.859 01:41:54 -- accel/accel.sh@20 -- # IFS=: 00:07:08.859 01:41:54 -- accel/accel.sh@20 -- # read -r var val 00:07:08.859 01:41:54 -- accel/accel.sh@21 -- # val= 00:07:08.859 01:41:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.859 01:41:54 -- accel/accel.sh@20 -- # IFS=: 00:07:08.859 01:41:54 -- accel/accel.sh@20 -- # read -r var val 00:07:08.859 01:41:54 -- accel/accel.sh@21 -- # val= 00:07:08.859 01:41:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.859 01:41:54 -- accel/accel.sh@20 -- # IFS=: 00:07:08.859 01:41:54 -- accel/accel.sh@20 -- # read -r var val 00:07:08.859 01:41:54 -- accel/accel.sh@21 -- # val= 00:07:08.859 01:41:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.859 01:41:54 -- accel/accel.sh@20 -- # IFS=: 00:07:08.859 01:41:54 -- accel/accel.sh@20 -- # read -r var val 00:07:08.859 01:41:54 -- accel/accel.sh@21 -- # val= 00:07:08.859 01:41:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.859 01:41:54 -- accel/accel.sh@20 -- # IFS=: 00:07:08.859 01:41:54 -- accel/accel.sh@20 -- # read -r var val 00:07:08.859 01:41:54 -- accel/accel.sh@21 -- # val= 00:07:08.859 01:41:54 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.859 01:41:54 -- accel/accel.sh@20 -- # IFS=: 00:07:08.859 01:41:54 -- accel/accel.sh@20 -- # read -r var val 00:07:08.859 01:41:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.859 01:41:54 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:08.859 01:41:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.859 00:07:08.859 real 0m2.793s 00:07:08.859 user 0m2.488s 00:07:08.859 sys 0m0.297s 00:07:08.859 01:41:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.859 01:41:54 -- common/autotest_common.sh@10 -- # set +x 00:07:08.859 ************************************ 00:07:08.859 END TEST accel_copy 00:07:08.859 ************************************ 00:07:09.118 01:41:54 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:09.118 01:41:54 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:09.118 01:41:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:09.118 01:41:54 -- common/autotest_common.sh@10 -- # set +x 00:07:09.118 ************************************ 00:07:09.118 START TEST accel_fill 00:07:09.118 ************************************ 00:07:09.118 01:41:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:09.118 01:41:54 -- accel/accel.sh@16 -- # local accel_opc 
00:07:09.118 01:41:54 -- accel/accel.sh@17 -- # local accel_module 00:07:09.118 01:41:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:09.118 01:41:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:09.118 01:41:54 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.118 01:41:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:09.118 01:41:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.118 01:41:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.118 01:41:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:09.118 01:41:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:09.118 01:41:54 -- accel/accel.sh@41 -- # local IFS=, 00:07:09.118 01:41:54 -- accel/accel.sh@42 -- # jq -r . 00:07:09.118 [2024-04-15 01:41:54.540494] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:09.118 [2024-04-15 01:41:54.540559] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047535 ] 00:07:09.118 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.118 [2024-04-15 01:41:54.603893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.118 [2024-04-15 01:41:54.693842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.539 01:41:55 -- accel/accel.sh@18 -- # out=' 00:07:10.539 SPDK Configuration: 00:07:10.539 Core mask: 0x1 00:07:10.540 00:07:10.540 Accel Perf Configuration: 00:07:10.540 Workload Type: fill 00:07:10.540 Fill pattern: 0x80 00:07:10.540 Transfer size: 4096 bytes 00:07:10.540 Vector count 1 00:07:10.540 Module: software 00:07:10.540 Queue depth: 64 00:07:10.540 Allocate depth: 64 00:07:10.540 # threads/core: 1 00:07:10.540 Run time: 1 seconds 00:07:10.540 Verify: Yes 00:07:10.540 00:07:10.540 Running for 1 seconds... 00:07:10.540 00:07:10.540 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.540 ------------------------------------------------------------------------------------ 00:07:10.540 0,0 403840/s 1577 MiB/s 0 0 00:07:10.540 ==================================================================================== 00:07:10.540 Total 403840/s 1577 MiB/s 0 0' 00:07:10.540 01:41:55 -- accel/accel.sh@20 -- # IFS=: 00:07:10.540 01:41:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:10.540 01:41:55 -- accel/accel.sh@20 -- # read -r var val 00:07:10.540 01:41:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:10.540 01:41:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.540 01:41:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.540 01:41:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.540 01:41:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.540 01:41:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.540 01:41:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.540 01:41:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.540 01:41:55 -- accel/accel.sh@42 -- # jq -r . 00:07:10.540 [2024-04-15 01:41:55.942689] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:10.540 [2024-04-15 01:41:55.942756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047679 ] 00:07:10.540 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.540 [2024-04-15 01:41:56.003587] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.540 [2024-04-15 01:41:56.093615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.540 01:41:56 -- accel/accel.sh@21 -- # val= 00:07:10.540 01:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # IFS=: 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # read -r var val 00:07:10.540 01:41:56 -- accel/accel.sh@21 -- # val= 00:07:10.540 01:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # IFS=: 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # read -r var val 00:07:10.540 01:41:56 -- accel/accel.sh@21 -- # val=0x1 00:07:10.540 01:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # IFS=: 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # read -r var val 00:07:10.540 01:41:56 -- accel/accel.sh@21 -- # val= 00:07:10.540 01:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # IFS=: 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # read -r var val 00:07:10.540 01:41:56 -- accel/accel.sh@21 -- # val= 00:07:10.540 01:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # IFS=: 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # read -r var val 00:07:10.540 01:41:56 -- accel/accel.sh@21 -- # val=fill 00:07:10.540 01:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.540 01:41:56 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # IFS=: 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # read -r var val 00:07:10.540 01:41:56 -- accel/accel.sh@21 -- # val=0x80 00:07:10.540 01:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # IFS=: 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # read -r var val 00:07:10.540 01:41:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.540 01:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # IFS=: 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # read -r var val 00:07:10.540 01:41:56 -- accel/accel.sh@21 -- # val= 00:07:10.540 01:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # IFS=: 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # read -r var val 00:07:10.540 01:41:56 -- accel/accel.sh@21 -- # val=software 00:07:10.540 01:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.540 01:41:56 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # IFS=: 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # read -r var val 00:07:10.540 01:41:56 -- accel/accel.sh@21 -- # val=64 00:07:10.540 01:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # IFS=: 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # read -r var val 00:07:10.540 01:41:56 -- accel/accel.sh@21 -- # val=64 00:07:10.540 01:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # IFS=: 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # read -r var val 00:07:10.540 01:41:56 -- 
accel/accel.sh@21 -- # val=1 00:07:10.540 01:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # IFS=: 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # read -r var val 00:07:10.540 01:41:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.540 01:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # IFS=: 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # read -r var val 00:07:10.540 01:41:56 -- accel/accel.sh@21 -- # val=Yes 00:07:10.540 01:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # IFS=: 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # read -r var val 00:07:10.540 01:41:56 -- accel/accel.sh@21 -- # val= 00:07:10.540 01:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # IFS=: 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # read -r var val 00:07:10.540 01:41:56 -- accel/accel.sh@21 -- # val= 00:07:10.540 01:41:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # IFS=: 00:07:10.540 01:41:56 -- accel/accel.sh@20 -- # read -r var val 00:07:11.915 01:41:57 -- accel/accel.sh@21 -- # val= 00:07:11.915 01:41:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.915 01:41:57 -- accel/accel.sh@20 -- # IFS=: 00:07:11.915 01:41:57 -- accel/accel.sh@20 -- # read -r var val 00:07:11.915 01:41:57 -- accel/accel.sh@21 -- # val= 00:07:11.915 01:41:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.915 01:41:57 -- accel/accel.sh@20 -- # IFS=: 00:07:11.915 01:41:57 -- accel/accel.sh@20 -- # read -r var val 00:07:11.915 01:41:57 -- accel/accel.sh@21 -- # val= 00:07:11.915 01:41:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.915 01:41:57 -- accel/accel.sh@20 -- # IFS=: 00:07:11.915 01:41:57 -- accel/accel.sh@20 -- # read -r var val 00:07:11.915 01:41:57 -- accel/accel.sh@21 -- # val= 00:07:11.915 01:41:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.915 01:41:57 -- accel/accel.sh@20 -- # IFS=: 00:07:11.915 01:41:57 -- accel/accel.sh@20 -- # read -r var val 00:07:11.915 01:41:57 -- accel/accel.sh@21 -- # val= 00:07:11.915 01:41:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.915 01:41:57 -- accel/accel.sh@20 -- # IFS=: 00:07:11.915 01:41:57 -- accel/accel.sh@20 -- # read -r var val 00:07:11.915 01:41:57 -- accel/accel.sh@21 -- # val= 00:07:11.915 01:41:57 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.915 01:41:57 -- accel/accel.sh@20 -- # IFS=: 00:07:11.915 01:41:57 -- accel/accel.sh@20 -- # read -r var val 00:07:11.915 01:41:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.915 01:41:57 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:11.915 01:41:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.915 00:07:11.915 real 0m2.807s 00:07:11.915 user 0m2.514s 00:07:11.915 sys 0m0.285s 00:07:11.915 01:41:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.915 01:41:57 -- common/autotest_common.sh@10 -- # set +x 00:07:11.915 ************************************ 00:07:11.915 END TEST accel_fill 00:07:11.915 ************************************ 00:07:11.915 01:41:57 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:11.915 01:41:57 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:11.915 01:41:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.915 01:41:57 -- common/autotest_common.sh@10 -- # set +x 00:07:11.915 ************************************ 00:07:11.915 START TEST 
accel_copy_crc32c 00:07:11.915 ************************************ 00:07:11.915 01:41:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:11.915 01:41:57 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.915 01:41:57 -- accel/accel.sh@17 -- # local accel_module 00:07:11.915 01:41:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:11.915 01:41:57 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:11.915 01:41:57 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.915 01:41:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.915 01:41:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.915 01:41:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.915 01:41:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.915 01:41:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.915 01:41:57 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.915 01:41:57 -- accel/accel.sh@42 -- # jq -r . 00:07:11.915 [2024-04-15 01:41:57.371788] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:11.915 [2024-04-15 01:41:57.371866] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2047838 ] 00:07:11.915 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.915 [2024-04-15 01:41:57.433683] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.915 [2024-04-15 01:41:57.527345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.290 01:41:58 -- accel/accel.sh@18 -- # out=' 00:07:13.290 SPDK Configuration: 00:07:13.290 Core mask: 0x1 00:07:13.290 00:07:13.290 Accel Perf Configuration: 00:07:13.290 Workload Type: copy_crc32c 00:07:13.290 CRC-32C seed: 0 00:07:13.290 Vector size: 4096 bytes 00:07:13.290 Transfer size: 4096 bytes 00:07:13.290 Vector count 1 00:07:13.290 Module: software 00:07:13.290 Queue depth: 32 00:07:13.290 Allocate depth: 32 00:07:13.290 # threads/core: 1 00:07:13.290 Run time: 1 seconds 00:07:13.290 Verify: Yes 00:07:13.290 00:07:13.290 Running for 1 seconds... 00:07:13.290 00:07:13.290 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.290 ------------------------------------------------------------------------------------ 00:07:13.290 0,0 217664/s 850 MiB/s 0 0 00:07:13.290 ==================================================================================== 00:07:13.290 Total 217664/s 850 MiB/s 0 0' 00:07:13.290 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.290 01:41:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:13.290 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.290 01:41:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:13.290 01:41:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.290 01:41:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.290 01:41:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.290 01:41:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.290 01:41:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.290 01:41:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.290 01:41:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.290 01:41:58 -- accel/accel.sh@42 -- # jq -r . 
00:07:13.290 [2024-04-15 01:41:58.770121] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:13.290 [2024-04-15 01:41:58.770193] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2048082 ] 00:07:13.290 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.290 [2024-04-15 01:41:58.831610] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.290 [2024-04-15 01:41:58.921727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.548 01:41:58 -- accel/accel.sh@21 -- # val= 00:07:13.548 01:41:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.548 01:41:58 -- accel/accel.sh@21 -- # val= 00:07:13.548 01:41:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.548 01:41:58 -- accel/accel.sh@21 -- # val=0x1 00:07:13.548 01:41:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.548 01:41:58 -- accel/accel.sh@21 -- # val= 00:07:13.548 01:41:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.548 01:41:58 -- accel/accel.sh@21 -- # val= 00:07:13.548 01:41:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.548 01:41:58 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:13.548 01:41:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.548 01:41:58 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.548 01:41:58 -- accel/accel.sh@21 -- # val=0 00:07:13.548 01:41:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.548 01:41:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.548 01:41:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.548 01:41:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.548 01:41:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.548 01:41:58 -- accel/accel.sh@21 -- # val= 00:07:13.548 01:41:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.548 01:41:58 -- accel/accel.sh@21 -- # val=software 00:07:13.548 01:41:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.548 01:41:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.548 01:41:58 -- accel/accel.sh@21 -- # val=32 00:07:13.548 01:41:58 -- accel/accel.sh@22 -- # case "$var" in 
00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.548 01:41:58 -- accel/accel.sh@21 -- # val=32 00:07:13.548 01:41:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.548 01:41:58 -- accel/accel.sh@21 -- # val=1 00:07:13.548 01:41:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.548 01:41:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.548 01:41:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.548 01:41:58 -- accel/accel.sh@21 -- # val=Yes 00:07:13.548 01:41:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.548 01:41:58 -- accel/accel.sh@21 -- # val= 00:07:13.548 01:41:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:13.548 01:41:58 -- accel/accel.sh@21 -- # val= 00:07:13.548 01:41:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # IFS=: 00:07:13.548 01:41:58 -- accel/accel.sh@20 -- # read -r var val 00:07:14.920 01:42:00 -- accel/accel.sh@21 -- # val= 00:07:14.921 01:42:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.921 01:42:00 -- accel/accel.sh@20 -- # IFS=: 00:07:14.921 01:42:00 -- accel/accel.sh@20 -- # read -r var val 00:07:14.921 01:42:00 -- accel/accel.sh@21 -- # val= 00:07:14.921 01:42:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.921 01:42:00 -- accel/accel.sh@20 -- # IFS=: 00:07:14.921 01:42:00 -- accel/accel.sh@20 -- # read -r var val 00:07:14.921 01:42:00 -- accel/accel.sh@21 -- # val= 00:07:14.921 01:42:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.921 01:42:00 -- accel/accel.sh@20 -- # IFS=: 00:07:14.921 01:42:00 -- accel/accel.sh@20 -- # read -r var val 00:07:14.921 01:42:00 -- accel/accel.sh@21 -- # val= 00:07:14.921 01:42:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.921 01:42:00 -- accel/accel.sh@20 -- # IFS=: 00:07:14.921 01:42:00 -- accel/accel.sh@20 -- # read -r var val 00:07:14.921 01:42:00 -- accel/accel.sh@21 -- # val= 00:07:14.921 01:42:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.921 01:42:00 -- accel/accel.sh@20 -- # IFS=: 00:07:14.921 01:42:00 -- accel/accel.sh@20 -- # read -r var val 00:07:14.921 01:42:00 -- accel/accel.sh@21 -- # val= 00:07:14.921 01:42:00 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.921 01:42:00 -- accel/accel.sh@20 -- # IFS=: 00:07:14.921 01:42:00 -- accel/accel.sh@20 -- # read -r var val 00:07:14.921 01:42:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.921 01:42:00 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:14.921 01:42:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.921 00:07:14.921 real 0m2.795s 00:07:14.921 user 0m2.500s 00:07:14.921 sys 0m0.288s 00:07:14.921 01:42:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.921 01:42:00 -- common/autotest_common.sh@10 -- # set +x 00:07:14.921 ************************************ 00:07:14.921 END TEST accel_copy_crc32c 00:07:14.921 ************************************ 00:07:14.921 
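The bandwidth columns in these result tables follow directly from the transfer counts: MiB/s is transfers per second multiplied by the transfer size in bytes, divided by 2^20. A quick bash sanity check against the copy_crc32c run above (217664 transfers/s at 4096 bytes):

  # 217664 * 4096 / 1048576 = 850 -> matches the 850 MiB/s reported in the table
  echo $(( 217664 * 4096 / 1048576 ))
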
01:42:00 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:14.921 01:42:00 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:14.921 01:42:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.921 01:42:00 -- common/autotest_common.sh@10 -- # set +x 00:07:14.921 ************************************ 00:07:14.921 START TEST accel_copy_crc32c_C2 00:07:14.921 ************************************ 00:07:14.921 01:42:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:14.921 01:42:00 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.921 01:42:00 -- accel/accel.sh@17 -- # local accel_module 00:07:14.921 01:42:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:14.921 01:42:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:14.921 01:42:00 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.921 01:42:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.921 01:42:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.921 01:42:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.921 01:42:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.921 01:42:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.921 01:42:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.921 01:42:00 -- accel/accel.sh@42 -- # jq -r . 00:07:14.921 [2024-04-15 01:42:00.193353] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:14.921 [2024-04-15 01:42:00.193442] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2048276 ] 00:07:14.921 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.921 [2024-04-15 01:42:00.256239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.921 [2024-04-15 01:42:00.350573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.294 01:42:01 -- accel/accel.sh@18 -- # out=' 00:07:16.294 SPDK Configuration: 00:07:16.294 Core mask: 0x1 00:07:16.294 00:07:16.294 Accel Perf Configuration: 00:07:16.294 Workload Type: copy_crc32c 00:07:16.294 CRC-32C seed: 0 00:07:16.294 Vector size: 4096 bytes 00:07:16.294 Transfer size: 8192 bytes 00:07:16.294 Vector count 2 00:07:16.294 Module: software 00:07:16.294 Queue depth: 32 00:07:16.294 Allocate depth: 32 00:07:16.294 # threads/core: 1 00:07:16.294 Run time: 1 seconds 00:07:16.294 Verify: Yes 00:07:16.294 00:07:16.294 Running for 1 seconds... 
00:07:16.294 00:07:16.294 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.294 ------------------------------------------------------------------------------------ 00:07:16.294 0,0 155136/s 1212 MiB/s 0 0 00:07:16.294 ==================================================================================== 00:07:16.294 Total 155136/s 606 MiB/s 0 0' 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # IFS=: 00:07:16.294 01:42:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:16.294 01:42:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:16.294 01:42:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.294 01:42:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.294 01:42:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.294 01:42:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.294 01:42:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.294 01:42:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.294 01:42:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.294 01:42:01 -- accel/accel.sh@42 -- # jq -r . 00:07:16.294 [2024-04-15 01:42:01.581945] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:16.294 [2024-04-15 01:42:01.582009] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2048498 ] 00:07:16.294 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.294 [2024-04-15 01:42:01.643852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.294 [2024-04-15 01:42:01.732165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.294 01:42:01 -- accel/accel.sh@21 -- # val= 00:07:16.294 01:42:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # IFS=: 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:16.294 01:42:01 -- accel/accel.sh@21 -- # val= 00:07:16.294 01:42:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # IFS=: 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:16.294 01:42:01 -- accel/accel.sh@21 -- # val=0x1 00:07:16.294 01:42:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # IFS=: 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:16.294 01:42:01 -- accel/accel.sh@21 -- # val= 00:07:16.294 01:42:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # IFS=: 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:16.294 01:42:01 -- accel/accel.sh@21 -- # val= 00:07:16.294 01:42:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # IFS=: 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:16.294 01:42:01 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:16.294 01:42:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.294 01:42:01 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # IFS=: 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:16.294 01:42:01 -- accel/accel.sh@21 -- # val=0 00:07:16.294 01:42:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # IFS=: 
00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:16.294 01:42:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.294 01:42:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # IFS=: 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:16.294 01:42:01 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:16.294 01:42:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # IFS=: 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:16.294 01:42:01 -- accel/accel.sh@21 -- # val= 00:07:16.294 01:42:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # IFS=: 00:07:16.294 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:16.294 01:42:01 -- accel/accel.sh@21 -- # val=software 00:07:16.294 01:42:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.295 01:42:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.295 01:42:01 -- accel/accel.sh@20 -- # IFS=: 00:07:16.295 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:16.295 01:42:01 -- accel/accel.sh@21 -- # val=32 00:07:16.295 01:42:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.295 01:42:01 -- accel/accel.sh@20 -- # IFS=: 00:07:16.295 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:16.295 01:42:01 -- accel/accel.sh@21 -- # val=32 00:07:16.295 01:42:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.295 01:42:01 -- accel/accel.sh@20 -- # IFS=: 00:07:16.295 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:16.295 01:42:01 -- accel/accel.sh@21 -- # val=1 00:07:16.295 01:42:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.295 01:42:01 -- accel/accel.sh@20 -- # IFS=: 00:07:16.295 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:16.295 01:42:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.295 01:42:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.295 01:42:01 -- accel/accel.sh@20 -- # IFS=: 00:07:16.295 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:16.295 01:42:01 -- accel/accel.sh@21 -- # val=Yes 00:07:16.295 01:42:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.295 01:42:01 -- accel/accel.sh@20 -- # IFS=: 00:07:16.295 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:16.295 01:42:01 -- accel/accel.sh@21 -- # val= 00:07:16.295 01:42:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.295 01:42:01 -- accel/accel.sh@20 -- # IFS=: 00:07:16.295 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:16.295 01:42:01 -- accel/accel.sh@21 -- # val= 00:07:16.295 01:42:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.295 01:42:01 -- accel/accel.sh@20 -- # IFS=: 00:07:16.295 01:42:01 -- accel/accel.sh@20 -- # read -r var val 00:07:17.667 01:42:02 -- accel/accel.sh@21 -- # val= 00:07:17.667 01:42:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.667 01:42:02 -- accel/accel.sh@20 -- # IFS=: 00:07:17.667 01:42:02 -- accel/accel.sh@20 -- # read -r var val 00:07:17.667 01:42:02 -- accel/accel.sh@21 -- # val= 00:07:17.667 01:42:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.667 01:42:02 -- accel/accel.sh@20 -- # IFS=: 00:07:17.667 01:42:02 -- accel/accel.sh@20 -- # read -r var val 00:07:17.667 01:42:02 -- accel/accel.sh@21 -- # val= 00:07:17.667 01:42:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.667 01:42:02 -- accel/accel.sh@20 -- # IFS=: 00:07:17.667 01:42:02 -- accel/accel.sh@20 -- # read -r var val 00:07:17.667 01:42:02 -- accel/accel.sh@21 -- # val= 00:07:17.667 01:42:02 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:17.667 01:42:02 -- accel/accel.sh@20 -- # IFS=: 00:07:17.667 01:42:02 -- accel/accel.sh@20 -- # read -r var val 00:07:17.667 01:42:02 -- accel/accel.sh@21 -- # val= 00:07:17.667 01:42:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.667 01:42:02 -- accel/accel.sh@20 -- # IFS=: 00:07:17.667 01:42:02 -- accel/accel.sh@20 -- # read -r var val 00:07:17.667 01:42:02 -- accel/accel.sh@21 -- # val= 00:07:17.667 01:42:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.667 01:42:02 -- accel/accel.sh@20 -- # IFS=: 00:07:17.667 01:42:02 -- accel/accel.sh@20 -- # read -r var val 00:07:17.667 01:42:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.667 01:42:02 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:17.667 01:42:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.667 00:07:17.667 real 0m2.795s 00:07:17.667 user 0m2.507s 00:07:17.667 sys 0m0.280s 00:07:17.667 01:42:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.667 01:42:02 -- common/autotest_common.sh@10 -- # set +x 00:07:17.667 ************************************ 00:07:17.667 END TEST accel_copy_crc32c_C2 00:07:17.667 ************************************ 00:07:17.667 01:42:02 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:17.667 01:42:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:17.667 01:42:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.667 01:42:02 -- common/autotest_common.sh@10 -- # set +x 00:07:17.667 ************************************ 00:07:17.667 START TEST accel_dualcast 00:07:17.667 ************************************ 00:07:17.667 01:42:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:17.667 01:42:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.667 01:42:02 -- accel/accel.sh@17 -- # local accel_module 00:07:17.667 01:42:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:17.667 01:42:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:17.667 01:42:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.667 01:42:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.667 01:42:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.667 01:42:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.667 01:42:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.667 01:42:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.667 01:42:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.667 01:42:02 -- accel/accel.sh@42 -- # jq -r . 00:07:17.667 [2024-04-15 01:42:03.008648] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:17.667 [2024-04-15 01:42:03.008727] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2048672 ] 00:07:17.667 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.667 [2024-04-15 01:42:03.072577] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.667 [2024-04-15 01:42:03.169108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.041 01:42:04 -- accel/accel.sh@18 -- # out=' 00:07:19.041 SPDK Configuration: 00:07:19.041 Core mask: 0x1 00:07:19.041 00:07:19.042 Accel Perf Configuration: 00:07:19.042 Workload Type: dualcast 00:07:19.042 Transfer size: 4096 bytes 00:07:19.042 Vector count 1 00:07:19.042 Module: software 00:07:19.042 Queue depth: 32 00:07:19.042 Allocate depth: 32 00:07:19.042 # threads/core: 1 00:07:19.042 Run time: 1 seconds 00:07:19.042 Verify: Yes 00:07:19.042 00:07:19.042 Running for 1 seconds... 00:07:19.042 00:07:19.042 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.042 ------------------------------------------------------------------------------------ 00:07:19.042 0,0 300192/s 1172 MiB/s 0 0 00:07:19.042 ==================================================================================== 00:07:19.042 Total 300192/s 1172 MiB/s 0 0' 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # IFS=: 00:07:19.042 01:42:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # read -r var val 00:07:19.042 01:42:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:19.042 01:42:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.042 01:42:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.042 01:42:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.042 01:42:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.042 01:42:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.042 01:42:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.042 01:42:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.042 01:42:04 -- accel/accel.sh@42 -- # jq -r . 00:07:19.042 [2024-04-15 01:42:04.405882] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:19.042 [2024-04-15 01:42:04.405947] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2048921 ] 00:07:19.042 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.042 [2024-04-15 01:42:04.467034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.042 [2024-04-15 01:42:04.557206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.042 01:42:04 -- accel/accel.sh@21 -- # val= 00:07:19.042 01:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # IFS=: 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # read -r var val 00:07:19.042 01:42:04 -- accel/accel.sh@21 -- # val= 00:07:19.042 01:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # IFS=: 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # read -r var val 00:07:19.042 01:42:04 -- accel/accel.sh@21 -- # val=0x1 00:07:19.042 01:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # IFS=: 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # read -r var val 00:07:19.042 01:42:04 -- accel/accel.sh@21 -- # val= 00:07:19.042 01:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # IFS=: 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # read -r var val 00:07:19.042 01:42:04 -- accel/accel.sh@21 -- # val= 00:07:19.042 01:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # IFS=: 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # read -r var val 00:07:19.042 01:42:04 -- accel/accel.sh@21 -- # val=dualcast 00:07:19.042 01:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.042 01:42:04 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # IFS=: 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # read -r var val 00:07:19.042 01:42:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.042 01:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # IFS=: 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # read -r var val 00:07:19.042 01:42:04 -- accel/accel.sh@21 -- # val= 00:07:19.042 01:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # IFS=: 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # read -r var val 00:07:19.042 01:42:04 -- accel/accel.sh@21 -- # val=software 00:07:19.042 01:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.042 01:42:04 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # IFS=: 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # read -r var val 00:07:19.042 01:42:04 -- accel/accel.sh@21 -- # val=32 00:07:19.042 01:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # IFS=: 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # read -r var val 00:07:19.042 01:42:04 -- accel/accel.sh@21 -- # val=32 00:07:19.042 01:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # IFS=: 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # read -r var val 00:07:19.042 01:42:04 -- accel/accel.sh@21 -- # val=1 00:07:19.042 01:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # IFS=: 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # read -r var val 00:07:19.042 01:42:04 
-- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.042 01:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # IFS=: 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # read -r var val 00:07:19.042 01:42:04 -- accel/accel.sh@21 -- # val=Yes 00:07:19.042 01:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # IFS=: 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # read -r var val 00:07:19.042 01:42:04 -- accel/accel.sh@21 -- # val= 00:07:19.042 01:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # IFS=: 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # read -r var val 00:07:19.042 01:42:04 -- accel/accel.sh@21 -- # val= 00:07:19.042 01:42:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # IFS=: 00:07:19.042 01:42:04 -- accel/accel.sh@20 -- # read -r var val 00:07:20.415 01:42:05 -- accel/accel.sh@21 -- # val= 00:07:20.415 01:42:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.415 01:42:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.415 01:42:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.415 01:42:05 -- accel/accel.sh@21 -- # val= 00:07:20.415 01:42:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.415 01:42:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.415 01:42:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.415 01:42:05 -- accel/accel.sh@21 -- # val= 00:07:20.415 01:42:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.415 01:42:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.415 01:42:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.415 01:42:05 -- accel/accel.sh@21 -- # val= 00:07:20.415 01:42:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.415 01:42:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.415 01:42:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.415 01:42:05 -- accel/accel.sh@21 -- # val= 00:07:20.415 01:42:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.415 01:42:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.415 01:42:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.415 01:42:05 -- accel/accel.sh@21 -- # val= 00:07:20.415 01:42:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.415 01:42:05 -- accel/accel.sh@20 -- # IFS=: 00:07:20.415 01:42:05 -- accel/accel.sh@20 -- # read -r var val 00:07:20.415 01:42:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.415 01:42:05 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:20.415 01:42:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.415 00:07:20.415 real 0m2.804s 00:07:20.415 user 0m2.515s 00:07:20.415 sys 0m0.279s 00:07:20.415 01:42:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.415 01:42:05 -- common/autotest_common.sh@10 -- # set +x 00:07:20.415 ************************************ 00:07:20.415 END TEST accel_dualcast 00:07:20.415 ************************************ 00:07:20.415 01:42:05 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:20.415 01:42:05 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:20.415 01:42:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.415 01:42:05 -- common/autotest_common.sh@10 -- # set +x 00:07:20.415 ************************************ 00:07:20.415 START TEST accel_compare 00:07:20.415 ************************************ 00:07:20.415 01:42:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:20.415 01:42:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.415 01:42:05 
-- accel/accel.sh@17 -- # local accel_module 00:07:20.415 01:42:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:20.415 01:42:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:20.415 01:42:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.415 01:42:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.415 01:42:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.415 01:42:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.415 01:42:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.415 01:42:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.415 01:42:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.415 01:42:05 -- accel/accel.sh@42 -- # jq -r . 00:07:20.415 [2024-04-15 01:42:05.836011] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:20.415 [2024-04-15 01:42:05.836110] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2049090 ] 00:07:20.415 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.415 [2024-04-15 01:42:05.897937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.415 [2024-04-15 01:42:05.988792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.788 01:42:07 -- accel/accel.sh@18 -- # out=' 00:07:21.788 SPDK Configuration: 00:07:21.788 Core mask: 0x1 00:07:21.788 00:07:21.788 Accel Perf Configuration: 00:07:21.788 Workload Type: compare 00:07:21.788 Transfer size: 4096 bytes 00:07:21.788 Vector count 1 00:07:21.788 Module: software 00:07:21.788 Queue depth: 32 00:07:21.788 Allocate depth: 32 00:07:21.788 # threads/core: 1 00:07:21.788 Run time: 1 seconds 00:07:21.788 Verify: Yes 00:07:21.788 00:07:21.788 Running for 1 seconds... 00:07:21.788 00:07:21.788 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:21.788 ------------------------------------------------------------------------------------ 00:07:21.788 0,0 400096/s 1562 MiB/s 0 0 00:07:21.788 ==================================================================================== 00:07:21.788 Total 400096/s 1562 MiB/s 0 0' 00:07:21.788 01:42:07 -- accel/accel.sh@20 -- # IFS=: 00:07:21.788 01:42:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:21.788 01:42:07 -- accel/accel.sh@20 -- # read -r var val 00:07:21.788 01:42:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:21.788 01:42:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:21.788 01:42:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:21.788 01:42:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:21.788 01:42:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:21.788 01:42:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:21.788 01:42:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:21.788 01:42:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:21.788 01:42:07 -- accel/accel.sh@42 -- # jq -r . 00:07:21.788 [2024-04-15 01:42:07.221839] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:21.788 [2024-04-15 01:42:07.221903] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2049236 ] 00:07:21.788 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.788 [2024-04-15 01:42:07.283413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.788 [2024-04-15 01:42:07.371714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.788 01:42:07 -- accel/accel.sh@21 -- # val= 00:07:21.788 01:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.788 01:42:07 -- accel/accel.sh@20 -- # IFS=: 00:07:21.788 01:42:07 -- accel/accel.sh@20 -- # read -r var val 00:07:21.788 01:42:07 -- accel/accel.sh@21 -- # val= 00:07:21.788 01:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # IFS=: 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # read -r var val 00:07:21.789 01:42:07 -- accel/accel.sh@21 -- # val=0x1 00:07:21.789 01:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # IFS=: 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # read -r var val 00:07:21.789 01:42:07 -- accel/accel.sh@21 -- # val= 00:07:21.789 01:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # IFS=: 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # read -r var val 00:07:21.789 01:42:07 -- accel/accel.sh@21 -- # val= 00:07:21.789 01:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # IFS=: 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # read -r var val 00:07:21.789 01:42:07 -- accel/accel.sh@21 -- # val=compare 00:07:21.789 01:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.789 01:42:07 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # IFS=: 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # read -r var val 00:07:21.789 01:42:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:21.789 01:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # IFS=: 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # read -r var val 00:07:21.789 01:42:07 -- accel/accel.sh@21 -- # val= 00:07:21.789 01:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # IFS=: 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # read -r var val 00:07:21.789 01:42:07 -- accel/accel.sh@21 -- # val=software 00:07:21.789 01:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.789 01:42:07 -- accel/accel.sh@23 -- # accel_module=software 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # IFS=: 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # read -r var val 00:07:21.789 01:42:07 -- accel/accel.sh@21 -- # val=32 00:07:21.789 01:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # IFS=: 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # read -r var val 00:07:21.789 01:42:07 -- accel/accel.sh@21 -- # val=32 00:07:21.789 01:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # IFS=: 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # read -r var val 00:07:21.789 01:42:07 -- accel/accel.sh@21 -- # val=1 00:07:21.789 01:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # IFS=: 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # read -r var val 00:07:21.789 01:42:07 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:21.789 01:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # IFS=: 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # read -r var val 00:07:21.789 01:42:07 -- accel/accel.sh@21 -- # val=Yes 00:07:21.789 01:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # IFS=: 00:07:21.789 01:42:07 -- accel/accel.sh@20 -- # read -r var val 00:07:21.789 01:42:07 -- accel/accel.sh@21 -- # val= 00:07:22.046 01:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.046 01:42:07 -- accel/accel.sh@20 -- # IFS=: 00:07:22.046 01:42:07 -- accel/accel.sh@20 -- # read -r var val 00:07:22.046 01:42:07 -- accel/accel.sh@21 -- # val= 00:07:22.046 01:42:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.046 01:42:07 -- accel/accel.sh@20 -- # IFS=: 00:07:22.046 01:42:07 -- accel/accel.sh@20 -- # read -r var val 00:07:22.981 01:42:08 -- accel/accel.sh@21 -- # val= 00:07:22.981 01:42:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.981 01:42:08 -- accel/accel.sh@20 -- # IFS=: 00:07:22.981 01:42:08 -- accel/accel.sh@20 -- # read -r var val 00:07:22.981 01:42:08 -- accel/accel.sh@21 -- # val= 00:07:22.981 01:42:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.981 01:42:08 -- accel/accel.sh@20 -- # IFS=: 00:07:22.981 01:42:08 -- accel/accel.sh@20 -- # read -r var val 00:07:22.981 01:42:08 -- accel/accel.sh@21 -- # val= 00:07:22.981 01:42:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.981 01:42:08 -- accel/accel.sh@20 -- # IFS=: 00:07:22.981 01:42:08 -- accel/accel.sh@20 -- # read -r var val 00:07:22.981 01:42:08 -- accel/accel.sh@21 -- # val= 00:07:22.981 01:42:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.981 01:42:08 -- accel/accel.sh@20 -- # IFS=: 00:07:22.981 01:42:08 -- accel/accel.sh@20 -- # read -r var val 00:07:22.982 01:42:08 -- accel/accel.sh@21 -- # val= 00:07:22.982 01:42:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.982 01:42:08 -- accel/accel.sh@20 -- # IFS=: 00:07:22.982 01:42:08 -- accel/accel.sh@20 -- # read -r var val 00:07:22.982 01:42:08 -- accel/accel.sh@21 -- # val= 00:07:22.982 01:42:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.982 01:42:08 -- accel/accel.sh@20 -- # IFS=: 00:07:22.982 01:42:08 -- accel/accel.sh@20 -- # read -r var val 00:07:22.982 01:42:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:22.982 01:42:08 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:22.982 01:42:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.982 00:07:22.982 real 0m2.777s 00:07:22.982 user 0m2.487s 00:07:22.982 sys 0m0.280s 00:07:22.982 01:42:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.982 01:42:08 -- common/autotest_common.sh@10 -- # set +x 00:07:22.982 ************************************ 00:07:22.982 END TEST accel_compare 00:07:22.982 ************************************ 00:07:22.982 01:42:08 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:22.982 01:42:08 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:22.982 01:42:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.982 01:42:08 -- common/autotest_common.sh@10 -- # set +x 00:07:22.982 ************************************ 00:07:22.982 START TEST accel_xor 00:07:22.982 ************************************ 00:07:22.982 01:42:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:22.982 01:42:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:22.982 01:42:08 -- accel/accel.sh@17 
-- # local accel_module 00:07:22.982 01:42:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:22.982 01:42:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:22.982 01:42:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.982 01:42:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.982 01:42:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.982 01:42:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.982 01:42:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.982 01:42:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.982 01:42:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.982 01:42:08 -- accel/accel.sh@42 -- # jq -r . 00:07:23.240 [2024-04-15 01:42:08.637039] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:23.240 [2024-04-15 01:42:08.637134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2049798 ] 00:07:23.240 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.240 [2024-04-15 01:42:08.698552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.240 [2024-04-15 01:42:08.788379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.614 01:42:10 -- accel/accel.sh@18 -- # out=' 00:07:24.614 SPDK Configuration: 00:07:24.614 Core mask: 0x1 00:07:24.614 00:07:24.614 Accel Perf Configuration: 00:07:24.614 Workload Type: xor 00:07:24.614 Source buffers: 2 00:07:24.614 Transfer size: 4096 bytes 00:07:24.614 Vector count 1 00:07:24.614 Module: software 00:07:24.614 Queue depth: 32 00:07:24.614 Allocate depth: 32 00:07:24.614 # threads/core: 1 00:07:24.614 Run time: 1 seconds 00:07:24.614 Verify: Yes 00:07:24.614 00:07:24.614 Running for 1 seconds... 00:07:24.614 00:07:24.614 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:24.614 ------------------------------------------------------------------------------------ 00:07:24.614 0,0 192224/s 750 MiB/s 0 0 00:07:24.614 ==================================================================================== 00:07:24.614 Total 192224/s 750 MiB/s 0 0' 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.614 01:42:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # read -r var val 00:07:24.614 01:42:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:24.614 01:42:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.614 01:42:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.614 01:42:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.614 01:42:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.614 01:42:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.614 01:42:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:24.614 01:42:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.614 01:42:10 -- accel/accel.sh@42 -- # jq -r . 00:07:24.614 [2024-04-15 01:42:10.033257] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:24.614 [2024-04-15 01:42:10.033346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2050114 ] 00:07:24.614 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.614 [2024-04-15 01:42:10.098484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.614 [2024-04-15 01:42:10.190181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.614 01:42:10 -- accel/accel.sh@21 -- # val= 00:07:24.614 01:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # read -r var val 00:07:24.614 01:42:10 -- accel/accel.sh@21 -- # val= 00:07:24.614 01:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # read -r var val 00:07:24.614 01:42:10 -- accel/accel.sh@21 -- # val=0x1 00:07:24.614 01:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # read -r var val 00:07:24.614 01:42:10 -- accel/accel.sh@21 -- # val= 00:07:24.614 01:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # read -r var val 00:07:24.614 01:42:10 -- accel/accel.sh@21 -- # val= 00:07:24.614 01:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # read -r var val 00:07:24.614 01:42:10 -- accel/accel.sh@21 -- # val=xor 00:07:24.614 01:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.614 01:42:10 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # read -r var val 00:07:24.614 01:42:10 -- accel/accel.sh@21 -- # val=2 00:07:24.614 01:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # read -r var val 00:07:24.614 01:42:10 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:24.614 01:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # read -r var val 00:07:24.614 01:42:10 -- accel/accel.sh@21 -- # val= 00:07:24.614 01:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # read -r var val 00:07:24.614 01:42:10 -- accel/accel.sh@21 -- # val=software 00:07:24.614 01:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.614 01:42:10 -- accel/accel.sh@23 -- # accel_module=software 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # read -r var val 00:07:24.614 01:42:10 -- accel/accel.sh@21 -- # val=32 00:07:24.614 01:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # read -r var val 00:07:24.614 01:42:10 -- accel/accel.sh@21 -- # val=32 00:07:24.614 01:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # read -r var val 00:07:24.614 01:42:10 -- 
accel/accel.sh@21 -- # val=1 00:07:24.614 01:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # read -r var val 00:07:24.614 01:42:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:24.614 01:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # read -r var val 00:07:24.614 01:42:10 -- accel/accel.sh@21 -- # val=Yes 00:07:24.614 01:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # read -r var val 00:07:24.614 01:42:10 -- accel/accel.sh@21 -- # val= 00:07:24.614 01:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.614 01:42:10 -- accel/accel.sh@20 -- # read -r var val 00:07:24.614 01:42:10 -- accel/accel.sh@21 -- # val= 00:07:24.614 01:42:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:24.615 01:42:10 -- accel/accel.sh@20 -- # IFS=: 00:07:24.615 01:42:10 -- accel/accel.sh@20 -- # read -r var val 00:07:25.988 01:42:11 -- accel/accel.sh@21 -- # val= 00:07:25.988 01:42:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.988 01:42:11 -- accel/accel.sh@20 -- # IFS=: 00:07:25.988 01:42:11 -- accel/accel.sh@20 -- # read -r var val 00:07:25.988 01:42:11 -- accel/accel.sh@21 -- # val= 00:07:25.988 01:42:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.988 01:42:11 -- accel/accel.sh@20 -- # IFS=: 00:07:25.988 01:42:11 -- accel/accel.sh@20 -- # read -r var val 00:07:25.988 01:42:11 -- accel/accel.sh@21 -- # val= 00:07:25.988 01:42:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.988 01:42:11 -- accel/accel.sh@20 -- # IFS=: 00:07:25.988 01:42:11 -- accel/accel.sh@20 -- # read -r var val 00:07:25.988 01:42:11 -- accel/accel.sh@21 -- # val= 00:07:25.988 01:42:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.988 01:42:11 -- accel/accel.sh@20 -- # IFS=: 00:07:25.988 01:42:11 -- accel/accel.sh@20 -- # read -r var val 00:07:25.988 01:42:11 -- accel/accel.sh@21 -- # val= 00:07:25.988 01:42:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.988 01:42:11 -- accel/accel.sh@20 -- # IFS=: 00:07:25.988 01:42:11 -- accel/accel.sh@20 -- # read -r var val 00:07:25.988 01:42:11 -- accel/accel.sh@21 -- # val= 00:07:25.988 01:42:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.988 01:42:11 -- accel/accel.sh@20 -- # IFS=: 00:07:25.988 01:42:11 -- accel/accel.sh@20 -- # read -r var val 00:07:25.988 01:42:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:25.988 01:42:11 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:25.988 01:42:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.988 00:07:25.988 real 0m2.800s 00:07:25.988 user 0m2.495s 00:07:25.988 sys 0m0.296s 00:07:25.988 01:42:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.988 01:42:11 -- common/autotest_common.sh@10 -- # set +x 00:07:25.988 ************************************ 00:07:25.988 END TEST accel_xor 00:07:25.988 ************************************ 00:07:25.988 01:42:11 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:25.988 01:42:11 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:25.988 01:42:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:25.988 01:42:11 -- common/autotest_common.sh@10 -- # set +x 00:07:25.988 ************************************ 00:07:25.988 START TEST accel_xor 
00:07:25.988 ************************************ 00:07:25.988 01:42:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:25.988 01:42:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:25.988 01:42:11 -- accel/accel.sh@17 -- # local accel_module 00:07:25.988 01:42:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:25.988 01:42:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:25.988 01:42:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.988 01:42:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.988 01:42:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.988 01:42:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.988 01:42:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.988 01:42:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.988 01:42:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.988 01:42:11 -- accel/accel.sh@42 -- # jq -r . 00:07:25.988 [2024-04-15 01:42:11.461564] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:25.988 [2024-04-15 01:42:11.461639] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2050323 ] 00:07:25.988 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.988 [2024-04-15 01:42:11.523278] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.988 [2024-04-15 01:42:11.613438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.363 01:42:12 -- accel/accel.sh@18 -- # out=' 00:07:27.363 SPDK Configuration: 00:07:27.363 Core mask: 0x1 00:07:27.363 00:07:27.363 Accel Perf Configuration: 00:07:27.363 Workload Type: xor 00:07:27.363 Source buffers: 3 00:07:27.363 Transfer size: 4096 bytes 00:07:27.363 Vector count 1 00:07:27.363 Module: software 00:07:27.363 Queue depth: 32 00:07:27.363 Allocate depth: 32 00:07:27.363 # threads/core: 1 00:07:27.363 Run time: 1 seconds 00:07:27.363 Verify: Yes 00:07:27.363 00:07:27.363 Running for 1 seconds... 00:07:27.363 00:07:27.363 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:27.363 ------------------------------------------------------------------------------------ 00:07:27.363 0,0 184096/s 719 MiB/s 0 0 00:07:27.363 ==================================================================================== 00:07:27.363 Total 184096/s 719 MiB/s 0 0' 00:07:27.363 01:42:12 -- accel/accel.sh@20 -- # IFS=: 00:07:27.363 01:42:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:27.363 01:42:12 -- accel/accel.sh@20 -- # read -r var val 00:07:27.363 01:42:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:27.363 01:42:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.363 01:42:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.363 01:42:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.363 01:42:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.363 01:42:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.363 01:42:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.363 01:42:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.363 01:42:12 -- accel/accel.sh@42 -- # jq -r . 00:07:27.363 [2024-04-15 01:42:12.863464] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:27.363 [2024-04-15 01:42:12.863529] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2050463 ] 00:07:27.363 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.363 [2024-04-15 01:42:12.924479] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.627 [2024-04-15 01:42:13.011934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.627 01:42:13 -- accel/accel.sh@21 -- # val= 00:07:27.627 01:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # IFS=: 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # read -r var val 00:07:27.627 01:42:13 -- accel/accel.sh@21 -- # val= 00:07:27.627 01:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # IFS=: 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # read -r var val 00:07:27.627 01:42:13 -- accel/accel.sh@21 -- # val=0x1 00:07:27.627 01:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # IFS=: 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # read -r var val 00:07:27.627 01:42:13 -- accel/accel.sh@21 -- # val= 00:07:27.627 01:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # IFS=: 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # read -r var val 00:07:27.627 01:42:13 -- accel/accel.sh@21 -- # val= 00:07:27.627 01:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # IFS=: 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # read -r var val 00:07:27.627 01:42:13 -- accel/accel.sh@21 -- # val=xor 00:07:27.627 01:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.627 01:42:13 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # IFS=: 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # read -r var val 00:07:27.627 01:42:13 -- accel/accel.sh@21 -- # val=3 00:07:27.627 01:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # IFS=: 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # read -r var val 00:07:27.627 01:42:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:27.627 01:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # IFS=: 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # read -r var val 00:07:27.627 01:42:13 -- accel/accel.sh@21 -- # val= 00:07:27.627 01:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # IFS=: 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # read -r var val 00:07:27.627 01:42:13 -- accel/accel.sh@21 -- # val=software 00:07:27.627 01:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.627 01:42:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # IFS=: 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # read -r var val 00:07:27.627 01:42:13 -- accel/accel.sh@21 -- # val=32 00:07:27.627 01:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # IFS=: 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # read -r var val 00:07:27.627 01:42:13 -- accel/accel.sh@21 -- # val=32 00:07:27.627 01:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # IFS=: 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # read -r var val 00:07:27.627 01:42:13 -- 
accel/accel.sh@21 -- # val=1 00:07:27.627 01:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # IFS=: 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # read -r var val 00:07:27.627 01:42:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:27.627 01:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # IFS=: 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # read -r var val 00:07:27.627 01:42:13 -- accel/accel.sh@21 -- # val=Yes 00:07:27.627 01:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # IFS=: 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # read -r var val 00:07:27.627 01:42:13 -- accel/accel.sh@21 -- # val= 00:07:27.627 01:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # IFS=: 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # read -r var val 00:07:27.627 01:42:13 -- accel/accel.sh@21 -- # val= 00:07:27.627 01:42:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # IFS=: 00:07:27.627 01:42:13 -- accel/accel.sh@20 -- # read -r var val 00:07:29.031 01:42:14 -- accel/accel.sh@21 -- # val= 00:07:29.031 01:42:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.031 01:42:14 -- accel/accel.sh@20 -- # IFS=: 00:07:29.031 01:42:14 -- accel/accel.sh@20 -- # read -r var val 00:07:29.031 01:42:14 -- accel/accel.sh@21 -- # val= 00:07:29.031 01:42:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.031 01:42:14 -- accel/accel.sh@20 -- # IFS=: 00:07:29.031 01:42:14 -- accel/accel.sh@20 -- # read -r var val 00:07:29.031 01:42:14 -- accel/accel.sh@21 -- # val= 00:07:29.031 01:42:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.031 01:42:14 -- accel/accel.sh@20 -- # IFS=: 00:07:29.031 01:42:14 -- accel/accel.sh@20 -- # read -r var val 00:07:29.031 01:42:14 -- accel/accel.sh@21 -- # val= 00:07:29.031 01:42:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.031 01:42:14 -- accel/accel.sh@20 -- # IFS=: 00:07:29.031 01:42:14 -- accel/accel.sh@20 -- # read -r var val 00:07:29.031 01:42:14 -- accel/accel.sh@21 -- # val= 00:07:29.031 01:42:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.031 01:42:14 -- accel/accel.sh@20 -- # IFS=: 00:07:29.031 01:42:14 -- accel/accel.sh@20 -- # read -r var val 00:07:29.031 01:42:14 -- accel/accel.sh@21 -- # val= 00:07:29.031 01:42:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.031 01:42:14 -- accel/accel.sh@20 -- # IFS=: 00:07:29.031 01:42:14 -- accel/accel.sh@20 -- # read -r var val 00:07:29.031 01:42:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:29.031 01:42:14 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:29.031 01:42:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.031 00:07:29.031 real 0m2.798s 00:07:29.031 user 0m2.505s 00:07:29.031 sys 0m0.285s 00:07:29.031 01:42:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.031 01:42:14 -- common/autotest_common.sh@10 -- # set +x 00:07:29.031 ************************************ 00:07:29.031 END TEST accel_xor 00:07:29.031 ************************************ 00:07:29.031 01:42:14 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:29.031 01:42:14 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:29.031 01:42:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.031 01:42:14 -- common/autotest_common.sh@10 -- # set +x 00:07:29.031 ************************************ 00:07:29.031 START TEST 
accel_dif_verify 00:07:29.031 ************************************ 00:07:29.031 01:42:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:29.031 01:42:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.031 01:42:14 -- accel/accel.sh@17 -- # local accel_module 00:07:29.031 01:42:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:29.031 01:42:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:29.031 01:42:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.031 01:42:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.031 01:42:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.031 01:42:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.031 01:42:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.031 01:42:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.031 01:42:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.031 01:42:14 -- accel/accel.sh@42 -- # jq -r . 00:07:29.031 [2024-04-15 01:42:14.283998] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:29.031 [2024-04-15 01:42:14.284094] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2050625 ] 00:07:29.031 EAL: No free 2048 kB hugepages reported on node 1 00:07:29.031 [2024-04-15 01:42:14.346800] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.031 [2024-04-15 01:42:14.436793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.404 01:42:15 -- accel/accel.sh@18 -- # out=' 00:07:30.404 SPDK Configuration: 00:07:30.404 Core mask: 0x1 00:07:30.404 00:07:30.404 Accel Perf Configuration: 00:07:30.404 Workload Type: dif_verify 00:07:30.404 Vector size: 4096 bytes 00:07:30.404 Transfer size: 4096 bytes 00:07:30.404 Block size: 512 bytes 00:07:30.404 Metadata size: 8 bytes 00:07:30.404 Vector count 1 00:07:30.404 Module: software 00:07:30.404 Queue depth: 32 00:07:30.404 Allocate depth: 32 00:07:30.404 # threads/core: 1 00:07:30.404 Run time: 1 seconds 00:07:30.404 Verify: No 00:07:30.404 00:07:30.404 Running for 1 seconds... 00:07:30.404 00:07:30.404 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.404 ------------------------------------------------------------------------------------ 00:07:30.404 0,0 81920/s 325 MiB/s 0 0 00:07:30.404 ==================================================================================== 00:07:30.404 Total 81920/s 320 MiB/s 0 0' 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.404 01:42:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.404 01:42:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:30.404 01:42:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.404 01:42:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.404 01:42:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.404 01:42:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.404 01:42:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.404 01:42:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.404 01:42:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.404 01:42:15 -- accel/accel.sh@42 -- # jq -r . 
00:07:30.404 [2024-04-15 01:42:15.686360] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:30.404 [2024-04-15 01:42:15.686451] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2050809 ] 00:07:30.404 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.404 [2024-04-15 01:42:15.749637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.404 [2024-04-15 01:42:15.839558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.404 01:42:15 -- accel/accel.sh@21 -- # val= 00:07:30.404 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.404 01:42:15 -- accel/accel.sh@21 -- # val= 00:07:30.404 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.404 01:42:15 -- accel/accel.sh@21 -- # val=0x1 00:07:30.404 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.404 01:42:15 -- accel/accel.sh@21 -- # val= 00:07:30.404 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.404 01:42:15 -- accel/accel.sh@21 -- # val= 00:07:30.404 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.404 01:42:15 -- accel/accel.sh@21 -- # val=dif_verify 00:07:30.404 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.404 01:42:15 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.404 01:42:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.404 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.404 01:42:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.404 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.404 01:42:15 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:30.404 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.404 01:42:15 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:30.404 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.404 01:42:15 -- accel/accel.sh@21 -- # val= 00:07:30.404 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.404 01:42:15 -- accel/accel.sh@21 -- # val=software 00:07:30.404 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.404 01:42:15 -- accel/accel.sh@23 -- # 
accel_module=software 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.404 01:42:15 -- accel/accel.sh@21 -- # val=32 00:07:30.404 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.404 01:42:15 -- accel/accel.sh@21 -- # val=32 00:07:30.404 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.404 01:42:15 -- accel/accel.sh@21 -- # val=1 00:07:30.404 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.404 01:42:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.404 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.404 01:42:15 -- accel/accel.sh@21 -- # val=No 00:07:30.404 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.404 01:42:15 -- accel/accel.sh@21 -- # val= 00:07:30.404 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.404 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.405 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:30.405 01:42:15 -- accel/accel.sh@21 -- # val= 00:07:30.405 01:42:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.405 01:42:15 -- accel/accel.sh@20 -- # IFS=: 00:07:30.405 01:42:15 -- accel/accel.sh@20 -- # read -r var val 00:07:31.779 01:42:17 -- accel/accel.sh@21 -- # val= 00:07:31.779 01:42:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.779 01:42:17 -- accel/accel.sh@20 -- # IFS=: 00:07:31.779 01:42:17 -- accel/accel.sh@20 -- # read -r var val 00:07:31.779 01:42:17 -- accel/accel.sh@21 -- # val= 00:07:31.779 01:42:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.779 01:42:17 -- accel/accel.sh@20 -- # IFS=: 00:07:31.779 01:42:17 -- accel/accel.sh@20 -- # read -r var val 00:07:31.779 01:42:17 -- accel/accel.sh@21 -- # val= 00:07:31.779 01:42:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.779 01:42:17 -- accel/accel.sh@20 -- # IFS=: 00:07:31.779 01:42:17 -- accel/accel.sh@20 -- # read -r var val 00:07:31.779 01:42:17 -- accel/accel.sh@21 -- # val= 00:07:31.779 01:42:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.779 01:42:17 -- accel/accel.sh@20 -- # IFS=: 00:07:31.779 01:42:17 -- accel/accel.sh@20 -- # read -r var val 00:07:31.779 01:42:17 -- accel/accel.sh@21 -- # val= 00:07:31.779 01:42:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.779 01:42:17 -- accel/accel.sh@20 -- # IFS=: 00:07:31.779 01:42:17 -- accel/accel.sh@20 -- # read -r var val 00:07:31.779 01:42:17 -- accel/accel.sh@21 -- # val= 00:07:31.779 01:42:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.779 01:42:17 -- accel/accel.sh@20 -- # IFS=: 00:07:31.779 01:42:17 -- accel/accel.sh@20 -- # read -r var val 00:07:31.779 01:42:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.779 01:42:17 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:31.779 01:42:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.779 00:07:31.779 real 0m2.808s 00:07:31.779 user 0m2.501s 00:07:31.779 sys 0m0.302s 00:07:31.779 01:42:17 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.779 01:42:17 -- common/autotest_common.sh@10 -- # set +x 00:07:31.779 ************************************ 00:07:31.779 END TEST accel_dif_verify 00:07:31.779 ************************************ 00:07:31.779 01:42:17 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:31.779 01:42:17 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:31.779 01:42:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.779 01:42:17 -- common/autotest_common.sh@10 -- # set +x 00:07:31.779 ************************************ 00:07:31.779 START TEST accel_dif_generate 00:07:31.779 ************************************ 00:07:31.779 01:42:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:31.779 01:42:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.779 01:42:17 -- accel/accel.sh@17 -- # local accel_module 00:07:31.779 01:42:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:31.779 01:42:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:31.779 01:42:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.779 01:42:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.779 01:42:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.779 01:42:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.779 01:42:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.779 01:42:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.779 01:42:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.779 01:42:17 -- accel/accel.sh@42 -- # jq -r . 00:07:31.779 [2024-04-15 01:42:17.114520] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:31.779 [2024-04-15 01:42:17.114594] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2051046 ] 00:07:31.779 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.779 [2024-04-15 01:42:17.176860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.779 [2024-04-15 01:42:17.267380] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.154 01:42:18 -- accel/accel.sh@18 -- # out=' 00:07:33.154 SPDK Configuration: 00:07:33.154 Core mask: 0x1 00:07:33.154 00:07:33.154 Accel Perf Configuration: 00:07:33.154 Workload Type: dif_generate 00:07:33.154 Vector size: 4096 bytes 00:07:33.154 Transfer size: 4096 bytes 00:07:33.154 Block size: 512 bytes 00:07:33.154 Metadata size: 8 bytes 00:07:33.154 Vector count 1 00:07:33.154 Module: software 00:07:33.154 Queue depth: 32 00:07:33.154 Allocate depth: 32 00:07:33.154 # threads/core: 1 00:07:33.154 Run time: 1 seconds 00:07:33.154 Verify: No 00:07:33.154 00:07:33.154 Running for 1 seconds... 
00:07:33.154 00:07:33.154 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.154 ------------------------------------------------------------------------------------ 00:07:33.154 0,0 96352/s 382 MiB/s 0 0 00:07:33.154 ==================================================================================== 00:07:33.154 Total 96352/s 376 MiB/s 0 0' 00:07:33.154 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.154 01:42:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:33.154 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.154 01:42:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:33.154 01:42:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.154 01:42:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.154 01:42:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.154 01:42:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.154 01:42:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.154 01:42:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.154 01:42:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.154 01:42:18 -- accel/accel.sh@42 -- # jq -r . 00:07:33.154 [2024-04-15 01:42:18.517002] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:33.154 [2024-04-15 01:42:18.517132] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2051190 ] 00:07:33.154 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.154 [2024-04-15 01:42:18.577970] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.154 [2024-04-15 01:42:18.673587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.154 01:42:18 -- accel/accel.sh@21 -- # val= 00:07:33.154 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.154 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.154 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.154 01:42:18 -- accel/accel.sh@21 -- # val= 00:07:33.154 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.154 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.154 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.154 01:42:18 -- accel/accel.sh@21 -- # val=0x1 00:07:33.154 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.154 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.154 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.154 01:42:18 -- accel/accel.sh@21 -- # val= 00:07:33.154 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.154 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.154 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.154 01:42:18 -- accel/accel.sh@21 -- # val= 00:07:33.154 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.154 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.154 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.154 01:42:18 -- accel/accel.sh@21 -- # val=dif_generate 00:07:33.154 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.154 01:42:18 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:33.154 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.154 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.154 01:42:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.154 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # IFS=: 
00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.155 01:42:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.155 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.155 01:42:18 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:33.155 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.155 01:42:18 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:33.155 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.155 01:42:18 -- accel/accel.sh@21 -- # val= 00:07:33.155 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.155 01:42:18 -- accel/accel.sh@21 -- # val=software 00:07:33.155 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.155 01:42:18 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.155 01:42:18 -- accel/accel.sh@21 -- # val=32 00:07:33.155 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.155 01:42:18 -- accel/accel.sh@21 -- # val=32 00:07:33.155 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.155 01:42:18 -- accel/accel.sh@21 -- # val=1 00:07:33.155 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.155 01:42:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.155 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.155 01:42:18 -- accel/accel.sh@21 -- # val=No 00:07:33.155 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.155 01:42:18 -- accel/accel.sh@21 -- # val= 00:07:33.155 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:33.155 01:42:18 -- accel/accel.sh@21 -- # val= 00:07:33.155 01:42:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # IFS=: 00:07:33.155 01:42:18 -- accel/accel.sh@20 -- # read -r var val 00:07:34.530 01:42:19 -- accel/accel.sh@21 -- # val= 00:07:34.530 01:42:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.530 01:42:19 -- accel/accel.sh@20 -- # IFS=: 00:07:34.530 01:42:19 -- accel/accel.sh@20 -- # read -r var val 00:07:34.530 01:42:19 -- accel/accel.sh@21 -- # val= 00:07:34.530 01:42:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.530 01:42:19 -- accel/accel.sh@20 -- # IFS=: 00:07:34.530 01:42:19 -- accel/accel.sh@20 -- # read -r var val 00:07:34.530 01:42:19 -- accel/accel.sh@21 -- # val= 00:07:34.530 01:42:19 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:34.530 01:42:19 -- accel/accel.sh@20 -- # IFS=: 00:07:34.530 01:42:19 -- accel/accel.sh@20 -- # read -r var val 00:07:34.530 01:42:19 -- accel/accel.sh@21 -- # val= 00:07:34.530 01:42:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.530 01:42:19 -- accel/accel.sh@20 -- # IFS=: 00:07:34.530 01:42:19 -- accel/accel.sh@20 -- # read -r var val 00:07:34.530 01:42:19 -- accel/accel.sh@21 -- # val= 00:07:34.530 01:42:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.530 01:42:19 -- accel/accel.sh@20 -- # IFS=: 00:07:34.530 01:42:19 -- accel/accel.sh@20 -- # read -r var val 00:07:34.530 01:42:19 -- accel/accel.sh@21 -- # val= 00:07:34.530 01:42:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.530 01:42:19 -- accel/accel.sh@20 -- # IFS=: 00:07:34.530 01:42:19 -- accel/accel.sh@20 -- # read -r var val 00:07:34.530 01:42:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.530 01:42:19 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:34.530 01:42:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.530 00:07:34.530 real 0m2.804s 00:07:34.530 user 0m2.507s 00:07:34.530 sys 0m0.292s 00:07:34.530 01:42:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.530 01:42:19 -- common/autotest_common.sh@10 -- # set +x 00:07:34.530 ************************************ 00:07:34.530 END TEST accel_dif_generate 00:07:34.530 ************************************ 00:07:34.530 01:42:19 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:34.530 01:42:19 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:34.530 01:42:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.530 01:42:19 -- common/autotest_common.sh@10 -- # set +x 00:07:34.530 ************************************ 00:07:34.530 START TEST accel_dif_generate_copy 00:07:34.530 ************************************ 00:07:34.530 01:42:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:34.530 01:42:19 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.530 01:42:19 -- accel/accel.sh@17 -- # local accel_module 00:07:34.530 01:42:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:34.530 01:42:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:34.530 01:42:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.530 01:42:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.530 01:42:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.530 01:42:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.530 01:42:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.530 01:42:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.530 01:42:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.530 01:42:19 -- accel/accel.sh@42 -- # jq -r . 00:07:34.530 [2024-04-15 01:42:19.941259] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:34.530 [2024-04-15 01:42:19.941345] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2051350 ] 00:07:34.530 EAL: No free 2048 kB hugepages reported on node 1 00:07:34.530 [2024-04-15 01:42:20.003948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.530 [2024-04-15 01:42:20.100432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.905 01:42:21 -- accel/accel.sh@18 -- # out=' 00:07:35.905 SPDK Configuration: 00:07:35.905 Core mask: 0x1 00:07:35.905 00:07:35.905 Accel Perf Configuration: 00:07:35.905 Workload Type: dif_generate_copy 00:07:35.905 Vector size: 4096 bytes 00:07:35.905 Transfer size: 4096 bytes 00:07:35.905 Vector count 1 00:07:35.905 Module: software 00:07:35.905 Queue depth: 32 00:07:35.905 Allocate depth: 32 00:07:35.905 # threads/core: 1 00:07:35.905 Run time: 1 seconds 00:07:35.905 Verify: No 00:07:35.905 00:07:35.905 Running for 1 seconds... 00:07:35.905 00:07:35.905 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.905 ------------------------------------------------------------------------------------ 00:07:35.905 0,0 75360/s 298 MiB/s 0 0 00:07:35.905 ==================================================================================== 00:07:35.905 Total 75360/s 294 MiB/s 0 0' 00:07:35.905 01:42:21 -- accel/accel.sh@20 -- # IFS=: 00:07:35.905 01:42:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:35.905 01:42:21 -- accel/accel.sh@20 -- # read -r var val 00:07:35.905 01:42:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:35.905 01:42:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.905 01:42:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.905 01:42:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.905 01:42:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.905 01:42:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.905 01:42:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.905 01:42:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.905 01:42:21 -- accel/accel.sh@42 -- # jq -r . 00:07:35.905 [2024-04-15 01:42:21.345193] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:35.905 [2024-04-15 01:42:21.345262] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2051511 ] 00:07:35.905 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.905 [2024-04-15 01:42:21.406385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.905 [2024-04-15 01:42:21.498343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.163 01:42:21 -- accel/accel.sh@21 -- # val= 00:07:36.163 01:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # IFS=: 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # read -r var val 00:07:36.163 01:42:21 -- accel/accel.sh@21 -- # val= 00:07:36.163 01:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # IFS=: 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # read -r var val 00:07:36.163 01:42:21 -- accel/accel.sh@21 -- # val=0x1 00:07:36.163 01:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # IFS=: 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # read -r var val 00:07:36.163 01:42:21 -- accel/accel.sh@21 -- # val= 00:07:36.163 01:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # IFS=: 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # read -r var val 00:07:36.163 01:42:21 -- accel/accel.sh@21 -- # val= 00:07:36.163 01:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # IFS=: 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # read -r var val 00:07:36.163 01:42:21 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:36.163 01:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.163 01:42:21 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # IFS=: 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # read -r var val 00:07:36.163 01:42:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:36.163 01:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # IFS=: 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # read -r var val 00:07:36.163 01:42:21 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:36.163 01:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # IFS=: 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # read -r var val 00:07:36.163 01:42:21 -- accel/accel.sh@21 -- # val= 00:07:36.163 01:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # IFS=: 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # read -r var val 00:07:36.163 01:42:21 -- accel/accel.sh@21 -- # val=software 00:07:36.163 01:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.163 01:42:21 -- accel/accel.sh@23 -- # accel_module=software 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # IFS=: 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # read -r var val 00:07:36.163 01:42:21 -- accel/accel.sh@21 -- # val=32 00:07:36.163 01:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # IFS=: 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # read -r var val 00:07:36.163 01:42:21 -- accel/accel.sh@21 -- # val=32 00:07:36.163 01:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # IFS=: 00:07:36.163 01:42:21 -- accel/accel.sh@20 -- # read -r 
var val 00:07:36.163 01:42:21 -- accel/accel.sh@21 -- # val=1 00:07:36.163 01:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.164 01:42:21 -- accel/accel.sh@20 -- # IFS=: 00:07:36.164 01:42:21 -- accel/accel.sh@20 -- # read -r var val 00:07:36.164 01:42:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:36.164 01:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.164 01:42:21 -- accel/accel.sh@20 -- # IFS=: 00:07:36.164 01:42:21 -- accel/accel.sh@20 -- # read -r var val 00:07:36.164 01:42:21 -- accel/accel.sh@21 -- # val=No 00:07:36.164 01:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.164 01:42:21 -- accel/accel.sh@20 -- # IFS=: 00:07:36.164 01:42:21 -- accel/accel.sh@20 -- # read -r var val 00:07:36.164 01:42:21 -- accel/accel.sh@21 -- # val= 00:07:36.164 01:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.164 01:42:21 -- accel/accel.sh@20 -- # IFS=: 00:07:36.164 01:42:21 -- accel/accel.sh@20 -- # read -r var val 00:07:36.164 01:42:21 -- accel/accel.sh@21 -- # val= 00:07:36.164 01:42:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.164 01:42:21 -- accel/accel.sh@20 -- # IFS=: 00:07:36.164 01:42:21 -- accel/accel.sh@20 -- # read -r var val 00:07:37.099 01:42:22 -- accel/accel.sh@21 -- # val= 00:07:37.099 01:42:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.099 01:42:22 -- accel/accel.sh@20 -- # IFS=: 00:07:37.099 01:42:22 -- accel/accel.sh@20 -- # read -r var val 00:07:37.099 01:42:22 -- accel/accel.sh@21 -- # val= 00:07:37.099 01:42:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.099 01:42:22 -- accel/accel.sh@20 -- # IFS=: 00:07:37.099 01:42:22 -- accel/accel.sh@20 -- # read -r var val 00:07:37.099 01:42:22 -- accel/accel.sh@21 -- # val= 00:07:37.099 01:42:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.099 01:42:22 -- accel/accel.sh@20 -- # IFS=: 00:07:37.099 01:42:22 -- accel/accel.sh@20 -- # read -r var val 00:07:37.099 01:42:22 -- accel/accel.sh@21 -- # val= 00:07:37.099 01:42:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.099 01:42:22 -- accel/accel.sh@20 -- # IFS=: 00:07:37.099 01:42:22 -- accel/accel.sh@20 -- # read -r var val 00:07:37.099 01:42:22 -- accel/accel.sh@21 -- # val= 00:07:37.099 01:42:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.099 01:42:22 -- accel/accel.sh@20 -- # IFS=: 00:07:37.099 01:42:22 -- accel/accel.sh@20 -- # read -r var val 00:07:37.099 01:42:22 -- accel/accel.sh@21 -- # val= 00:07:37.099 01:42:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.099 01:42:22 -- accel/accel.sh@20 -- # IFS=: 00:07:37.099 01:42:22 -- accel/accel.sh@20 -- # read -r var val 00:07:37.099 01:42:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.099 01:42:22 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:37.099 01:42:22 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.099 00:07:37.099 real 0m2.811s 00:07:37.099 user 0m2.513s 00:07:37.099 sys 0m0.289s 00:07:37.099 01:42:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.099 01:42:22 -- common/autotest_common.sh@10 -- # set +x 00:07:37.099 ************************************ 00:07:37.099 END TEST accel_dif_generate_copy 00:07:37.099 ************************************ 00:07:37.358 01:42:22 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:37.358 01:42:22 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:37.358 01:42:22 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:37.358 01:42:22 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.358 01:42:22 -- common/autotest_common.sh@10 -- # set +x 00:07:37.358 ************************************ 00:07:37.358 START TEST accel_comp 00:07:37.358 ************************************ 00:07:37.358 01:42:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:37.358 01:42:22 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.358 01:42:22 -- accel/accel.sh@17 -- # local accel_module 00:07:37.358 01:42:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:37.358 01:42:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:37.358 01:42:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.358 01:42:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.358 01:42:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.358 01:42:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.358 01:42:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.358 01:42:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.358 01:42:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.358 01:42:22 -- accel/accel.sh@42 -- # jq -r . 00:07:37.358 [2024-04-15 01:42:22.779602] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:37.358 [2024-04-15 01:42:22.779676] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2051773 ] 00:07:37.358 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.358 [2024-04-15 01:42:22.842413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.358 [2024-04-15 01:42:22.932902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.732 01:42:24 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:38.732 00:07:38.732 SPDK Configuration: 00:07:38.732 Core mask: 0x1 00:07:38.732 00:07:38.732 Accel Perf Configuration: 00:07:38.732 Workload Type: compress 00:07:38.732 Transfer size: 4096 bytes 00:07:38.732 Vector count 1 00:07:38.732 Module: software 00:07:38.732 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:38.732 Queue depth: 32 00:07:38.732 Allocate depth: 32 00:07:38.732 # threads/core: 1 00:07:38.732 Run time: 1 seconds 00:07:38.732 Verify: No 00:07:38.732 00:07:38.732 Running for 1 seconds... 
00:07:38.732 00:07:38.732 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:38.732 ------------------------------------------------------------------------------------ 00:07:38.732 0,0 32416/s 135 MiB/s 0 0 00:07:38.732 ==================================================================================== 00:07:38.732 Total 32416/s 126 MiB/s 0 0' 00:07:38.732 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.732 01:42:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:38.732 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:38.732 01:42:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:38.732 01:42:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.732 01:42:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.733 01:42:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.733 01:42:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.733 01:42:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.733 01:42:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.733 01:42:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.733 01:42:24 -- accel/accel.sh@42 -- # jq -r . 00:07:38.733 [2024-04-15 01:42:24.191823] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:38.733 [2024-04-15 01:42:24.191913] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2051911 ] 00:07:38.733 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.733 [2024-04-15 01:42:24.255538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.733 [2024-04-15 01:42:24.343249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.991 01:42:24 -- accel/accel.sh@21 -- # val= 00:07:38.991 01:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:38.991 01:42:24 -- accel/accel.sh@21 -- # val= 00:07:38.991 01:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:38.991 01:42:24 -- accel/accel.sh@21 -- # val= 00:07:38.991 01:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:38.991 01:42:24 -- accel/accel.sh@21 -- # val=0x1 00:07:38.991 01:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:38.991 01:42:24 -- accel/accel.sh@21 -- # val= 00:07:38.991 01:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:38.991 01:42:24 -- accel/accel.sh@21 -- # val= 00:07:38.991 01:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:38.991 01:42:24 -- accel/accel.sh@21 -- # val=compress 00:07:38.991 01:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.991 
01:42:24 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:38.991 01:42:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:38.991 01:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:38.991 01:42:24 -- accel/accel.sh@21 -- # val= 00:07:38.991 01:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:38.991 01:42:24 -- accel/accel.sh@21 -- # val=software 00:07:38.991 01:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.991 01:42:24 -- accel/accel.sh@23 -- # accel_module=software 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:38.991 01:42:24 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:38.991 01:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:38.991 01:42:24 -- accel/accel.sh@21 -- # val=32 00:07:38.991 01:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:38.991 01:42:24 -- accel/accel.sh@21 -- # val=32 00:07:38.991 01:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.991 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.992 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:38.992 01:42:24 -- accel/accel.sh@21 -- # val=1 00:07:38.992 01:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.992 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.992 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:38.992 01:42:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:38.992 01:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.992 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.992 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:38.992 01:42:24 -- accel/accel.sh@21 -- # val=No 00:07:38.992 01:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.992 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.992 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:38.992 01:42:24 -- accel/accel.sh@21 -- # val= 00:07:38.992 01:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.992 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.992 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:38.992 01:42:24 -- accel/accel.sh@21 -- # val= 00:07:38.992 01:42:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.992 01:42:24 -- accel/accel.sh@20 -- # IFS=: 00:07:38.992 01:42:24 -- accel/accel.sh@20 -- # read -r var val 00:07:39.926 01:42:25 -- accel/accel.sh@21 -- # val= 00:07:39.926 01:42:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.926 01:42:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.926 01:42:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.926 01:42:25 -- accel/accel.sh@21 -- # val= 00:07:39.926 01:42:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.926 01:42:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.926 01:42:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.926 01:42:25 -- accel/accel.sh@21 -- # val= 00:07:39.926 01:42:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.926 01:42:25 -- accel/accel.sh@20 -- # 
IFS=: 00:07:39.926 01:42:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.926 01:42:25 -- accel/accel.sh@21 -- # val= 00:07:39.926 01:42:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.926 01:42:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.926 01:42:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.926 01:42:25 -- accel/accel.sh@21 -- # val= 00:07:39.926 01:42:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.926 01:42:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.926 01:42:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.926 01:42:25 -- accel/accel.sh@21 -- # val= 00:07:39.926 01:42:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.926 01:42:25 -- accel/accel.sh@20 -- # IFS=: 00:07:39.926 01:42:25 -- accel/accel.sh@20 -- # read -r var val 00:07:39.926 01:42:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:39.926 01:42:25 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:39.926 01:42:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.926 00:07:39.926 real 0m2.810s 00:07:39.926 user 0m2.506s 00:07:39.926 sys 0m0.298s 00:07:39.926 01:42:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.926 01:42:25 -- common/autotest_common.sh@10 -- # set +x 00:07:39.926 ************************************ 00:07:39.926 END TEST accel_comp 00:07:39.926 ************************************ 00:07:40.185 01:42:25 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:40.185 01:42:25 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:40.185 01:42:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.185 01:42:25 -- common/autotest_common.sh@10 -- # set +x 00:07:40.185 ************************************ 00:07:40.185 START TEST accel_decomp 00:07:40.185 ************************************ 00:07:40.185 01:42:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:40.185 01:42:25 -- accel/accel.sh@16 -- # local accel_opc 00:07:40.185 01:42:25 -- accel/accel.sh@17 -- # local accel_module 00:07:40.185 01:42:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:40.185 01:42:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:40.185 01:42:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.185 01:42:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.185 01:42:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.185 01:42:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.185 01:42:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.185 01:42:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.185 01:42:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.185 01:42:25 -- accel/accel.sh@42 -- # jq -r . 00:07:40.185 [2024-04-15 01:42:25.612977] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:40.185 [2024-04-15 01:42:25.613067] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052076 ] 00:07:40.185 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.185 [2024-04-15 01:42:25.674354] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.185 [2024-04-15 01:42:25.764639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.559 01:42:26 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:41.560 00:07:41.560 SPDK Configuration: 00:07:41.560 Core mask: 0x1 00:07:41.560 00:07:41.560 Accel Perf Configuration: 00:07:41.560 Workload Type: decompress 00:07:41.560 Transfer size: 4096 bytes 00:07:41.560 Vector count 1 00:07:41.560 Module: software 00:07:41.560 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.560 Queue depth: 32 00:07:41.560 Allocate depth: 32 00:07:41.560 # threads/core: 1 00:07:41.560 Run time: 1 seconds 00:07:41.560 Verify: Yes 00:07:41.560 00:07:41.560 Running for 1 seconds... 00:07:41.560 00:07:41.560 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:41.560 ------------------------------------------------------------------------------------ 00:07:41.560 0,0 55552/s 102 MiB/s 0 0 00:07:41.560 ==================================================================================== 00:07:41.560 Total 55552/s 217 MiB/s 0 0' 00:07:41.560 01:42:26 -- accel/accel.sh@20 -- # IFS=: 00:07:41.560 01:42:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:41.560 01:42:26 -- accel/accel.sh@20 -- # read -r var val 00:07:41.560 01:42:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:07:41.560 01:42:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.560 01:42:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.560 01:42:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.560 01:42:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.560 01:42:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.560 01:42:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.560 01:42:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.560 01:42:26 -- accel/accel.sh@42 -- # jq -r . 00:07:41.560 [2024-04-15 01:42:27.013781] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:41.560 [2024-04-15 01:42:27.013856] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052245 ] 00:07:41.560 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.560 [2024-04-15 01:42:27.076741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.560 [2024-04-15 01:42:27.166696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.818 01:42:27 -- accel/accel.sh@21 -- # val= 00:07:41.818 01:42:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.818 01:42:27 -- accel/accel.sh@20 -- # IFS=: 00:07:41.818 01:42:27 -- accel/accel.sh@20 -- # read -r var val 00:07:41.818 01:42:27 -- accel/accel.sh@21 -- # val= 00:07:41.818 01:42:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.818 01:42:27 -- accel/accel.sh@20 -- # IFS=: 00:07:41.818 01:42:27 -- accel/accel.sh@20 -- # read -r var val 00:07:41.818 01:42:27 -- accel/accel.sh@21 -- # val= 00:07:41.818 01:42:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.818 01:42:27 -- accel/accel.sh@20 -- # IFS=: 00:07:41.818 01:42:27 -- accel/accel.sh@20 -- # read -r var val 00:07:41.819 01:42:27 -- accel/accel.sh@21 -- # val=0x1 00:07:41.819 01:42:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # IFS=: 00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # read -r var val 00:07:41.819 01:42:27 -- accel/accel.sh@21 -- # val= 00:07:41.819 01:42:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # IFS=: 00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # read -r var val 00:07:41.819 01:42:27 -- accel/accel.sh@21 -- # val= 00:07:41.819 01:42:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # IFS=: 00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # read -r var val 00:07:41.819 01:42:27 -- accel/accel.sh@21 -- # val=decompress 00:07:41.819 01:42:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.819 01:42:27 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # IFS=: 00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # read -r var val 00:07:41.819 01:42:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:41.819 01:42:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # IFS=: 00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # read -r var val 00:07:41.819 01:42:27 -- accel/accel.sh@21 -- # val= 00:07:41.819 01:42:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # IFS=: 00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # read -r var val 00:07:41.819 01:42:27 -- accel/accel.sh@21 -- # val=software 00:07:41.819 01:42:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.819 01:42:27 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # IFS=: 00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # read -r var val 00:07:41.819 01:42:27 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:41.819 01:42:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # IFS=: 00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # read -r var val 00:07:41.819 01:42:27 -- accel/accel.sh@21 -- # val=32 00:07:41.819 01:42:27 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # IFS=: 00:07:41.819 01:42:27 
-- accel/accel.sh@20 -- # read -r var val
00:07:41.819 01:42:27 -- accel/accel.sh@21 -- # val=32
00:07:41.819 01:42:27 -- accel/accel.sh@22 -- # case "$var" in
00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # IFS=:
00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # read -r var val
00:07:41.819 01:42:27 -- accel/accel.sh@21 -- # val=1
00:07:41.819 01:42:27 -- accel/accel.sh@22 -- # case "$var" in
00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # IFS=:
00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # read -r var val
00:07:41.819 01:42:27 -- accel/accel.sh@21 -- # val='1 seconds'
00:07:41.819 01:42:27 -- accel/accel.sh@22 -- # case "$var" in
00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # IFS=:
00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # read -r var val
00:07:41.819 01:42:27 -- accel/accel.sh@21 -- # val=Yes
00:07:41.819 01:42:27 -- accel/accel.sh@22 -- # case "$var" in
00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # IFS=:
00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # read -r var val
00:07:41.819 01:42:27 -- accel/accel.sh@21 -- # val=
00:07:41.819 01:42:27 -- accel/accel.sh@22 -- # case "$var" in
00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # IFS=:
00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # read -r var val
00:07:41.819 01:42:27 -- accel/accel.sh@21 -- # val=
00:07:41.819 01:42:27 -- accel/accel.sh@22 -- # case "$var" in
00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # IFS=:
00:07:41.819 01:42:27 -- accel/accel.sh@20 -- # read -r var val
00:07:43.193 01:42:28 -- accel/accel.sh@21 -- # val=
00:07:43.193 01:42:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:43.193 01:42:28 -- accel/accel.sh@20 -- # IFS=:
00:07:43.193 01:42:28 -- accel/accel.sh@20 -- # read -r var val
00:07:43.193 01:42:28 -- accel/accel.sh@21 -- # val=
00:07:43.193 01:42:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:43.193 01:42:28 -- accel/accel.sh@20 -- # IFS=:
00:07:43.193 01:42:28 -- accel/accel.sh@20 -- # read -r var val
00:07:43.193 01:42:28 -- accel/accel.sh@21 -- # val=
00:07:43.193 01:42:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:43.193 01:42:28 -- accel/accel.sh@20 -- # IFS=:
00:07:43.193 01:42:28 -- accel/accel.sh@20 -- # read -r var val
00:07:43.193 01:42:28 -- accel/accel.sh@21 -- # val=
00:07:43.193 01:42:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:43.193 01:42:28 -- accel/accel.sh@20 -- # IFS=:
00:07:43.193 01:42:28 -- accel/accel.sh@20 -- # read -r var val
00:07:43.193 01:42:28 -- accel/accel.sh@21 -- # val=
00:07:43.193 01:42:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:43.193 01:42:28 -- accel/accel.sh@20 -- # IFS=:
00:07:43.193 01:42:28 -- accel/accel.sh@20 -- # read -r var val
00:07:43.193 01:42:28 -- accel/accel.sh@21 -- # val=
00:07:43.193 01:42:28 -- accel/accel.sh@22 -- # case "$var" in
00:07:43.193 01:42:28 -- accel/accel.sh@20 -- # IFS=:
00:07:43.193 01:42:28 -- accel/accel.sh@20 -- # read -r var val
00:07:43.193 01:42:28 -- accel/accel.sh@28 -- # [[ -n software ]]
00:07:43.193 01:42:28 -- accel/accel.sh@28 -- # [[ -n decompress ]]
00:07:43.193 01:42:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:43.193
00:07:43.193 real 0m2.812s
00:07:43.193 user 0m2.528s
00:07:43.193 sys 0m0.277s
00:07:43.193 01:42:28 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:07:43.193 01:42:28 -- common/autotest_common.sh@10 -- # set +x
00:07:43.193 ************************************
00:07:43.193 END TEST accel_decomp
00:07:43.193 ************************************
00:07:43.193 01:42:28 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:43.193 01:42:28 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']'
00:07:43.194 01:42:28 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:07:43.194 01:42:28 -- common/autotest_common.sh@10 -- # set +x
00:07:43.194 ************************************
00:07:43.194 START TEST accel_decmop_full
00:07:43.194 ************************************
00:07:43.194 01:42:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:43.194 01:42:28 -- accel/accel.sh@16 -- # local accel_opc
00:07:43.194 01:42:28 -- accel/accel.sh@17 -- # local accel_module
00:07:43.194 01:42:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:43.194 01:42:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:43.194 01:42:28 -- accel/accel.sh@12 -- # build_accel_config
00:07:43.194 01:42:28 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:07:43.194 01:42:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:43.194 01:42:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:43.194 01:42:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:07:43.194 01:42:28 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:07:43.194 01:42:28 -- accel/accel.sh@41 -- # local IFS=,
00:07:43.194 01:42:28 -- accel/accel.sh@42 -- # jq -r .
00:07:43.194 [2024-04-15 01:42:28.447250] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
00:07:43.194 [2024-04-15 01:42:28.447324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052494 ]
00:07:43.194 EAL: No free 2048 kB hugepages reported on node 1
00:07:43.194 [2024-04-15 01:42:28.509220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:43.194 [2024-04-15 01:42:28.600200] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:07:44.570 01:42:29 -- accel/accel.sh@18 -- # out='Preparing input file...
00:07:44.570
00:07:44.570 SPDK Configuration:
00:07:44.570 Core mask: 0x1
00:07:44.570
00:07:44.570 Accel Perf Configuration:
00:07:44.570 Workload Type: decompress
00:07:44.570 Transfer size: 111250 bytes
00:07:44.570 Vector count 1
00:07:44.570 Module: software
00:07:44.570 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:44.570 Queue depth: 32
00:07:44.570 Allocate depth: 32
00:07:44.570 # threads/core: 1
00:07:44.570 Run time: 1 seconds
00:07:44.570 Verify: Yes
00:07:44.570
00:07:44.570 Running for 1 seconds...
00:07:44.570 00:07:44.570 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:44.570 ------------------------------------------------------------------------------------ 00:07:44.570 0,0 3808/s 157 MiB/s 0 0 00:07:44.570 ==================================================================================== 00:07:44.570 Total 3808/s 404 MiB/s 0 0' 00:07:44.570 01:42:29 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:44.570 01:42:29 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 01:42:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:44.570 01:42:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.570 01:42:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.570 01:42:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.570 01:42:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.570 01:42:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.570 01:42:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.570 01:42:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.570 01:42:29 -- accel/accel.sh@42 -- # jq -r . 00:07:44.570 [2024-04-15 01:42:29.872991] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:44.570 [2024-04-15 01:42:29.873079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052638 ] 00:07:44.570 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.570 [2024-04-15 01:42:29.934224] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.570 [2024-04-15 01:42:30.030921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.570 01:42:30 -- accel/accel.sh@21 -- # val= 00:07:44.570 01:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 01:42:30 -- accel/accel.sh@21 -- # val= 00:07:44.570 01:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 01:42:30 -- accel/accel.sh@21 -- # val= 00:07:44.570 01:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 01:42:30 -- accel/accel.sh@21 -- # val=0x1 00:07:44.570 01:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 01:42:30 -- accel/accel.sh@21 -- # val= 00:07:44.570 01:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 01:42:30 -- accel/accel.sh@21 -- # val= 00:07:44.570 01:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 01:42:30 -- accel/accel.sh@21 -- # val=decompress 00:07:44.570 01:42:30 -- accel/accel.sh@22 -- # case "$var" 
in 00:07:44.570 01:42:30 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 01:42:30 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:44.570 01:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 01:42:30 -- accel/accel.sh@21 -- # val= 00:07:44.570 01:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 01:42:30 -- accel/accel.sh@21 -- # val=software 00:07:44.570 01:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 01:42:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 01:42:30 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:44.570 01:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 01:42:30 -- accel/accel.sh@21 -- # val=32 00:07:44.570 01:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 01:42:30 -- accel/accel.sh@21 -- # val=32 00:07:44.570 01:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 01:42:30 -- accel/accel.sh@21 -- # val=1 00:07:44.570 01:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 01:42:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:44.570 01:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 01:42:30 -- accel/accel.sh@21 -- # val=Yes 00:07:44.570 01:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 01:42:30 -- accel/accel.sh@21 -- # val= 00:07:44.570 01:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # read -r var val 00:07:44.570 01:42:30 -- accel/accel.sh@21 -- # val= 00:07:44.570 01:42:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # IFS=: 00:07:44.570 01:42:30 -- accel/accel.sh@20 -- # read -r var val 00:07:45.972 01:42:31 -- accel/accel.sh@21 -- # val= 00:07:45.972 01:42:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.972 01:42:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.972 01:42:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.972 01:42:31 -- accel/accel.sh@21 -- # val= 00:07:45.972 01:42:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.972 01:42:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.972 01:42:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.972 01:42:31 -- accel/accel.sh@21 -- # val= 00:07:45.972 01:42:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.972 01:42:31 -- 
accel/accel.sh@20 -- # IFS=: 00:07:45.972 01:42:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.972 01:42:31 -- accel/accel.sh@21 -- # val= 00:07:45.972 01:42:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.972 01:42:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.972 01:42:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.972 01:42:31 -- accel/accel.sh@21 -- # val= 00:07:45.972 01:42:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.972 01:42:31 -- accel/accel.sh@20 -- # IFS=: 00:07:45.972 01:42:31 -- accel/accel.sh@20 -- # read -r var val 00:07:45.972 01:42:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:45.972 01:42:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:45.972 01:42:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.972 00:07:45.972 real 0m2.848s 00:07:45.972 user 0m2.546s 00:07:45.972 sys 0m0.296s 00:07:45.972 01:42:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:45.972 01:42:31 -- common/autotest_common.sh@10 -- # set +x 00:07:45.972 ************************************ 00:07:45.972 END TEST accel_decomp_full 00:07:45.972 ************************************ 00:07:45.972 01:42:31 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:45.972 01:42:31 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:45.972 01:42:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:45.972 01:42:31 -- common/autotest_common.sh@10 -- # set +x 00:07:45.972 ************************************ 00:07:45.972 START TEST accel_decomp_mcore 00:07:45.972 ************************************ 00:07:45.972 01:42:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:45.972 01:42:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.972 01:42:31 -- accel/accel.sh@17 -- # local accel_module 00:07:45.972 01:42:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:45.972 01:42:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:45.972 01:42:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.972 01:42:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.972 01:42:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.972 01:42:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.972 01:42:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.972 01:42:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.972 01:42:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.972 01:42:31 -- accel/accel.sh@42 -- # jq -r . 00:07:45.972 [2024-04-15 01:42:31.319434] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
00:07:45.972 [2024-04-15 01:42:31.319509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2052799 ] 00:07:45.972 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.972 [2024-04-15 01:42:31.381669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.972 [2024-04-15 01:42:31.479633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.972 [2024-04-15 01:42:31.479686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.972 [2024-04-15 01:42:31.479754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.972 [2024-04-15 01:42:31.479756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.343 01:42:32 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:47.344 00:07:47.344 SPDK Configuration: 00:07:47.344 Core mask: 0xf 00:07:47.344 00:07:47.344 Accel Perf Configuration: 00:07:47.344 Workload Type: decompress 00:07:47.344 Transfer size: 4096 bytes 00:07:47.344 Vector count 1 00:07:47.344 Module: software 00:07:47.344 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:47.344 Queue depth: 32 00:07:47.344 Allocate depth: 32 00:07:47.344 # threads/core: 1 00:07:47.344 Run time: 1 seconds 00:07:47.344 Verify: Yes 00:07:47.344 00:07:47.344 Running for 1 seconds... 00:07:47.344 00:07:47.344 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:47.344 ------------------------------------------------------------------------------------ 00:07:47.344 0,0 56704/s 104 MiB/s 0 0 00:07:47.344 3,0 57408/s 105 MiB/s 0 0 00:07:47.344 2,0 57376/s 105 MiB/s 0 0 00:07:47.344 1,0 57280/s 105 MiB/s 0 0 00:07:47.344 ==================================================================================== 00:07:47.344 Total 228768/s 893 MiB/s 0 0' 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 01:42:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:47.344 01:42:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.344 01:42:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.344 01:42:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.344 01:42:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.344 01:42:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.344 01:42:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.344 01:42:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.344 01:42:32 -- accel/accel.sh@42 -- # jq -r . 00:07:47.344 [2024-04-15 01:42:32.721539] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
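
Cross-checking the table above: -m 0xf brings up reactors on cores 0-3, and the four per-core rates sum to 56704 + 57408 + 57376 + 57280 = 228768 transfers/s. At the 4096-byte transfer size that is 228768 * 4096 B, about 893 MiB/s, matching the Total row. The per-core MiB/s column reads lower than rate times transfer size would give; the numbers are consistent with bandwidth accounted on the compressed input side rather than the decompressed output, though that is an inference from this log, not something accel_perf states here.
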
00:07:47.344 [2024-04-15 01:42:32.721623] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053068 ] 00:07:47.344 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.344 [2024-04-15 01:42:32.784527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.344 [2024-04-15 01:42:32.877863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.344 [2024-04-15 01:42:32.877920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.344 [2024-04-15 01:42:32.878038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.344 [2024-04-15 01:42:32.878040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.344 01:42:32 -- accel/accel.sh@21 -- # val= 00:07:47.344 01:42:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 01:42:32 -- accel/accel.sh@21 -- # val= 00:07:47.344 01:42:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 01:42:32 -- accel/accel.sh@21 -- # val= 00:07:47.344 01:42:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 01:42:32 -- accel/accel.sh@21 -- # val=0xf 00:07:47.344 01:42:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 01:42:32 -- accel/accel.sh@21 -- # val= 00:07:47.344 01:42:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 01:42:32 -- accel/accel.sh@21 -- # val= 00:07:47.344 01:42:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 01:42:32 -- accel/accel.sh@21 -- # val=decompress 00:07:47.344 01:42:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 01:42:32 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 01:42:32 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:47.344 01:42:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 01:42:32 -- accel/accel.sh@21 -- # val= 00:07:47.344 01:42:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 01:42:32 -- accel/accel.sh@21 -- # val=software 00:07:47.344 01:42:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 01:42:32 -- accel/accel.sh@23 -- # accel_module=software 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 01:42:32 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:47.344 01:42:32 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 01:42:32 -- accel/accel.sh@21 -- # val=32 00:07:47.344 01:42:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 01:42:32 -- accel/accel.sh@21 -- # val=32 00:07:47.344 01:42:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 01:42:32 -- accel/accel.sh@21 -- # val=1 00:07:47.344 01:42:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 01:42:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:47.344 01:42:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 01:42:32 -- accel/accel.sh@21 -- # val=Yes 00:07:47.344 01:42:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 01:42:32 -- accel/accel.sh@21 -- # val= 00:07:47.344 01:42:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:47.344 01:42:32 -- accel/accel.sh@21 -- # val= 00:07:47.344 01:42:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # IFS=: 00:07:47.344 01:42:32 -- accel/accel.sh@20 -- # read -r var val 00:07:48.716 01:42:34 -- accel/accel.sh@21 -- # val= 00:07:48.716 01:42:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.716 01:42:34 -- accel/accel.sh@20 -- # IFS=: 00:07:48.716 01:42:34 -- accel/accel.sh@20 -- # read -r var val 00:07:48.716 01:42:34 -- accel/accel.sh@21 -- # val= 00:07:48.716 01:42:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.716 01:42:34 -- accel/accel.sh@20 -- # IFS=: 00:07:48.716 01:42:34 -- accel/accel.sh@20 -- # read -r var val 00:07:48.716 01:42:34 -- accel/accel.sh@21 -- # val= 00:07:48.716 01:42:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.716 01:42:34 -- accel/accel.sh@20 -- # IFS=: 00:07:48.716 01:42:34 -- accel/accel.sh@20 -- # read -r var val 00:07:48.716 01:42:34 -- accel/accel.sh@21 -- # val= 00:07:48.716 01:42:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.716 01:42:34 -- accel/accel.sh@20 -- # IFS=: 00:07:48.716 01:42:34 -- accel/accel.sh@20 -- # read -r var val 00:07:48.716 01:42:34 -- accel/accel.sh@21 -- # val= 00:07:48.716 01:42:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.716 01:42:34 -- accel/accel.sh@20 -- # IFS=: 00:07:48.716 01:42:34 -- accel/accel.sh@20 -- # read -r var val 00:07:48.716 01:42:34 -- accel/accel.sh@21 -- # val= 00:07:48.716 01:42:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.716 01:42:34 -- accel/accel.sh@20 -- # IFS=: 00:07:48.716 01:42:34 -- accel/accel.sh@20 -- # read -r var val 00:07:48.716 01:42:34 -- accel/accel.sh@21 -- # val= 00:07:48.716 01:42:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.716 01:42:34 -- accel/accel.sh@20 -- # IFS=: 00:07:48.716 01:42:34 -- accel/accel.sh@20 -- # read -r var val 00:07:48.716 01:42:34 -- accel/accel.sh@21 -- # val= 00:07:48.716 01:42:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.716 
01:42:34 -- accel/accel.sh@20 -- # IFS=: 00:07:48.716 01:42:34 -- accel/accel.sh@20 -- # read -r var val 00:07:48.716 01:42:34 -- accel/accel.sh@21 -- # val= 00:07:48.716 01:42:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.716 01:42:34 -- accel/accel.sh@20 -- # IFS=: 00:07:48.716 01:42:34 -- accel/accel.sh@20 -- # read -r var val 00:07:48.716 01:42:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:48.716 01:42:34 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:48.716 01:42:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.716 00:07:48.716 real 0m2.817s 00:07:48.716 user 0m9.376s 00:07:48.716 sys 0m0.306s 00:07:48.716 01:42:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:48.716 01:42:34 -- common/autotest_common.sh@10 -- # set +x 00:07:48.716 ************************************ 00:07:48.716 END TEST accel_decomp_mcore 00:07:48.716 ************************************ 00:07:48.716 01:42:34 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:48.716 01:42:34 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:48.716 01:42:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:48.716 01:42:34 -- common/autotest_common.sh@10 -- # set +x 00:07:48.716 ************************************ 00:07:48.716 START TEST accel_decomp_full_mcore 00:07:48.716 ************************************ 00:07:48.716 01:42:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:48.716 01:42:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:48.716 01:42:34 -- accel/accel.sh@17 -- # local accel_module 00:07:48.716 01:42:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:48.717 01:42:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:48.717 01:42:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.717 01:42:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.717 01:42:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.717 01:42:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.717 01:42:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.717 01:42:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.717 01:42:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.717 01:42:34 -- accel/accel.sh@42 -- # jq -r . 00:07:48.717 [2024-04-15 01:42:34.167947] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:48.717 [2024-04-15 01:42:34.168039] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053229 ] 00:07:48.717 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.717 [2024-04-15 01:42:34.231609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:48.717 [2024-04-15 01:42:34.322808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.717 [2024-04-15 01:42:34.322879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.717 [2024-04-15 01:42:34.322978] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:48.717 [2024-04-15 01:42:34.322981] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.090 01:42:35 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:50.090 00:07:50.090 SPDK Configuration: 00:07:50.090 Core mask: 0xf 00:07:50.090 00:07:50.090 Accel Perf Configuration: 00:07:50.090 Workload Type: decompress 00:07:50.090 Transfer size: 111250 bytes 00:07:50.090 Vector count 1 00:07:50.090 Module: software 00:07:50.090 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:50.090 Queue depth: 32 00:07:50.090 Allocate depth: 32 00:07:50.090 # threads/core: 1 00:07:50.090 Run time: 1 seconds 00:07:50.090 Verify: Yes 00:07:50.090 00:07:50.090 Running for 1 seconds... 00:07:50.090 00:07:50.090 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:50.090 ------------------------------------------------------------------------------------ 00:07:50.090 0,0 4256/s 175 MiB/s 0 0 00:07:50.090 3,0 4256/s 175 MiB/s 0 0 00:07:50.090 2,0 4256/s 175 MiB/s 0 0 00:07:50.090 1,0 4256/s 175 MiB/s 0 0 00:07:50.090 ==================================================================================== 00:07:50.091 Total 17024/s 1806 MiB/s 0 0' 00:07:50.091 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.091 01:42:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:50.091 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:50.091 01:42:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:50.091 01:42:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.091 01:42:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.091 01:42:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.091 01:42:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.091 01:42:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.091 01:42:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.091 01:42:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.091 01:42:35 -- accel/accel.sh@42 -- # jq -r . 00:07:50.091 [2024-04-15 01:42:35.590303] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
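
The same check holds for the full-buffer mcore run above: 4 * 4256 = 17024 transfers/s, and 17024 * 111250 B is about 1806 MiB/s, matching the Total row. Per-core rate (4256/s) is in the same range as the earlier single-core full-buffer run (3808/s), so the software decompress path scales close to linearly across the four reactors. The user/real split just logged for the 4096-byte mcore test (user 0m9.376s against real 0m2.817s) is also what four busy-polling reactors should produce across two roughly 1-second runs.
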
00:07:50.091 [2024-04-15 01:42:35.590414] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053376 ] 00:07:50.091 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.091 [2024-04-15 01:42:35.652991] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.349 [2024-04-15 01:42:35.746882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.349 [2024-04-15 01:42:35.746952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.349 [2024-04-15 01:42:35.747053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.349 [2024-04-15 01:42:35.747054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.349 01:42:35 -- accel/accel.sh@21 -- # val= 00:07:50.349 01:42:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:50.349 01:42:35 -- accel/accel.sh@21 -- # val= 00:07:50.349 01:42:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:50.349 01:42:35 -- accel/accel.sh@21 -- # val= 00:07:50.349 01:42:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:50.349 01:42:35 -- accel/accel.sh@21 -- # val=0xf 00:07:50.349 01:42:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:50.349 01:42:35 -- accel/accel.sh@21 -- # val= 00:07:50.349 01:42:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:50.349 01:42:35 -- accel/accel.sh@21 -- # val= 00:07:50.349 01:42:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:50.349 01:42:35 -- accel/accel.sh@21 -- # val=decompress 00:07:50.349 01:42:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.349 01:42:35 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:50.349 01:42:35 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:50.349 01:42:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:50.349 01:42:35 -- accel/accel.sh@21 -- # val= 00:07:50.349 01:42:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:50.349 01:42:35 -- accel/accel.sh@21 -- # val=software 00:07:50.349 01:42:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.349 01:42:35 -- accel/accel.sh@23 -- # accel_module=software 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:50.349 01:42:35 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:50.349 01:42:35 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:50.349 01:42:35 -- accel/accel.sh@21 -- # val=32 00:07:50.349 01:42:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:50.349 01:42:35 -- accel/accel.sh@21 -- # val=32 00:07:50.349 01:42:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:50.349 01:42:35 -- accel/accel.sh@21 -- # val=1 00:07:50.349 01:42:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:50.349 01:42:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:50.349 01:42:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:50.349 01:42:35 -- accel/accel.sh@21 -- # val=Yes 00:07:50.349 01:42:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:50.349 01:42:35 -- accel/accel.sh@21 -- # val= 00:07:50.349 01:42:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:50.349 01:42:35 -- accel/accel.sh@21 -- # val= 00:07:50.349 01:42:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # IFS=: 00:07:50.349 01:42:35 -- accel/accel.sh@20 -- # read -r var val 00:07:51.722 01:42:36 -- accel/accel.sh@21 -- # val= 00:07:51.722 01:42:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.722 01:42:36 -- accel/accel.sh@20 -- # IFS=: 00:07:51.722 01:42:36 -- accel/accel.sh@20 -- # read -r var val 00:07:51.722 01:42:36 -- accel/accel.sh@21 -- # val= 00:07:51.722 01:42:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.722 01:42:36 -- accel/accel.sh@20 -- # IFS=: 00:07:51.722 01:42:36 -- accel/accel.sh@20 -- # read -r var val 00:07:51.722 01:42:36 -- accel/accel.sh@21 -- # val= 00:07:51.722 01:42:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.722 01:42:36 -- accel/accel.sh@20 -- # IFS=: 00:07:51.722 01:42:36 -- accel/accel.sh@20 -- # read -r var val 00:07:51.722 01:42:36 -- accel/accel.sh@21 -- # val= 00:07:51.722 01:42:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.722 01:42:36 -- accel/accel.sh@20 -- # IFS=: 00:07:51.722 01:42:36 -- accel/accel.sh@20 -- # read -r var val 00:07:51.722 01:42:36 -- accel/accel.sh@21 -- # val= 00:07:51.722 01:42:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.722 01:42:36 -- accel/accel.sh@20 -- # IFS=: 00:07:51.722 01:42:36 -- accel/accel.sh@20 -- # read -r var val 00:07:51.722 01:42:36 -- accel/accel.sh@21 -- # val= 00:07:51.722 01:42:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.722 01:42:36 -- accel/accel.sh@20 -- # IFS=: 00:07:51.722 01:42:36 -- accel/accel.sh@20 -- # read -r var val 00:07:51.722 01:42:36 -- accel/accel.sh@21 -- # val= 00:07:51.722 01:42:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.722 01:42:36 -- accel/accel.sh@20 -- # IFS=: 00:07:51.722 01:42:36 -- accel/accel.sh@20 -- # read -r var val 00:07:51.722 01:42:36 -- accel/accel.sh@21 -- # val= 00:07:51.722 01:42:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.722 
01:42:36 -- accel/accel.sh@20 -- # IFS=: 00:07:51.722 01:42:36 -- accel/accel.sh@20 -- # read -r var val 00:07:51.722 01:42:36 -- accel/accel.sh@21 -- # val= 00:07:51.722 01:42:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.722 01:42:36 -- accel/accel.sh@20 -- # IFS=: 00:07:51.722 01:42:36 -- accel/accel.sh@20 -- # read -r var val 00:07:51.722 01:42:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:51.722 01:42:36 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:51.722 01:42:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:51.722 00:07:51.722 real 0m2.849s 00:07:51.722 user 0m9.501s 00:07:51.722 sys 0m0.305s 00:07:51.722 01:42:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.722 01:42:36 -- common/autotest_common.sh@10 -- # set +x 00:07:51.722 ************************************ 00:07:51.722 END TEST accel_decomp_full_mcore 00:07:51.722 ************************************ 00:07:51.722 01:42:37 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:51.722 01:42:37 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:51.722 01:42:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:51.722 01:42:37 -- common/autotest_common.sh@10 -- # set +x 00:07:51.722 ************************************ 00:07:51.722 START TEST accel_decomp_mthread 00:07:51.722 ************************************ 00:07:51.722 01:42:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:51.722 01:42:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:51.722 01:42:37 -- accel/accel.sh@17 -- # local accel_module 00:07:51.722 01:42:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:51.722 01:42:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:51.722 01:42:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.722 01:42:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.722 01:42:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.722 01:42:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.722 01:42:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.722 01:42:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.722 01:42:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.722 01:42:37 -- accel/accel.sh@42 -- # jq -r . 00:07:51.722 [2024-04-15 01:42:37.040945] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:51.722 [2024-04-15 01:42:37.041023] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053538 ] 00:07:51.722 EAL: No free 2048 kB hugepages reported on node 1 00:07:51.722 [2024-04-15 01:42:37.105189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.722 [2024-04-15 01:42:37.194675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.097 01:42:38 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:53.097 00:07:53.097 SPDK Configuration: 00:07:53.097 Core mask: 0x1 00:07:53.097 00:07:53.097 Accel Perf Configuration: 00:07:53.097 Workload Type: decompress 00:07:53.097 Transfer size: 4096 bytes 00:07:53.097 Vector count 1 00:07:53.097 Module: software 00:07:53.097 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:53.097 Queue depth: 32 00:07:53.097 Allocate depth: 32 00:07:53.097 # threads/core: 2 00:07:53.097 Run time: 1 seconds 00:07:53.097 Verify: Yes 00:07:53.097 00:07:53.097 Running for 1 seconds... 00:07:53.097 00:07:53.097 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:53.097 ------------------------------------------------------------------------------------ 00:07:53.097 0,1 28128/s 51 MiB/s 0 0 00:07:53.097 0,0 28032/s 51 MiB/s 0 0 00:07:53.097 ==================================================================================== 00:07:53.097 Total 56160/s 219 MiB/s 0 0' 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.097 01:42:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # read -r var val 00:07:53.097 01:42:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:53.097 01:42:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.097 01:42:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.097 01:42:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.097 01:42:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.097 01:42:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.097 01:42:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.097 01:42:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.097 01:42:38 -- accel/accel.sh@42 -- # jq -r . 00:07:53.097 [2024-04-15 01:42:38.446614] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
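
Here -T 2 places two worker threads on core 0 (note '# threads/core: 2' in the configuration block above), which is why the table carries rows 0,0 and 0,1. Their rates sum to 28128 + 28032 = 56160 transfers/s, and 56160 * 4096 B is about 219 MiB/s, matching the Total row.
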
00:07:53.097 [2024-04-15 01:42:38.446692] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053796 ] 00:07:53.097 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.097 [2024-04-15 01:42:38.508228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.097 [2024-04-15 01:42:38.602614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.097 01:42:38 -- accel/accel.sh@21 -- # val= 00:07:53.097 01:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # read -r var val 00:07:53.097 01:42:38 -- accel/accel.sh@21 -- # val= 00:07:53.097 01:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # read -r var val 00:07:53.097 01:42:38 -- accel/accel.sh@21 -- # val= 00:07:53.097 01:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # read -r var val 00:07:53.097 01:42:38 -- accel/accel.sh@21 -- # val=0x1 00:07:53.097 01:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # read -r var val 00:07:53.097 01:42:38 -- accel/accel.sh@21 -- # val= 00:07:53.097 01:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # read -r var val 00:07:53.097 01:42:38 -- accel/accel.sh@21 -- # val= 00:07:53.097 01:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # read -r var val 00:07:53.097 01:42:38 -- accel/accel.sh@21 -- # val=decompress 00:07:53.097 01:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.097 01:42:38 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # read -r var val 00:07:53.097 01:42:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:53.097 01:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # read -r var val 00:07:53.097 01:42:38 -- accel/accel.sh@21 -- # val= 00:07:53.097 01:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # read -r var val 00:07:53.097 01:42:38 -- accel/accel.sh@21 -- # val=software 00:07:53.097 01:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.097 01:42:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # read -r var val 00:07:53.097 01:42:38 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:53.097 01:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # read -r var val 00:07:53.097 01:42:38 -- accel/accel.sh@21 -- # val=32 00:07:53.097 01:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.097 01:42:38 
-- accel/accel.sh@20 -- # read -r var val 00:07:53.097 01:42:38 -- accel/accel.sh@21 -- # val=32 00:07:53.097 01:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # read -r var val 00:07:53.097 01:42:38 -- accel/accel.sh@21 -- # val=2 00:07:53.097 01:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # read -r var val 00:07:53.097 01:42:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:53.097 01:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # read -r var val 00:07:53.097 01:42:38 -- accel/accel.sh@21 -- # val=Yes 00:07:53.097 01:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.097 01:42:38 -- accel/accel.sh@20 -- # read -r var val 00:07:53.097 01:42:38 -- accel/accel.sh@21 -- # val= 00:07:53.098 01:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.098 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.098 01:42:38 -- accel/accel.sh@20 -- # read -r var val 00:07:53.098 01:42:38 -- accel/accel.sh@21 -- # val= 00:07:53.098 01:42:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.098 01:42:38 -- accel/accel.sh@20 -- # IFS=: 00:07:53.098 01:42:38 -- accel/accel.sh@20 -- # read -r var val 00:07:54.472 01:42:39 -- accel/accel.sh@21 -- # val= 00:07:54.472 01:42:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.472 01:42:39 -- accel/accel.sh@20 -- # IFS=: 00:07:54.472 01:42:39 -- accel/accel.sh@20 -- # read -r var val 00:07:54.472 01:42:39 -- accel/accel.sh@21 -- # val= 00:07:54.472 01:42:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.472 01:42:39 -- accel/accel.sh@20 -- # IFS=: 00:07:54.472 01:42:39 -- accel/accel.sh@20 -- # read -r var val 00:07:54.472 01:42:39 -- accel/accel.sh@21 -- # val= 00:07:54.472 01:42:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.472 01:42:39 -- accel/accel.sh@20 -- # IFS=: 00:07:54.472 01:42:39 -- accel/accel.sh@20 -- # read -r var val 00:07:54.472 01:42:39 -- accel/accel.sh@21 -- # val= 00:07:54.472 01:42:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.472 01:42:39 -- accel/accel.sh@20 -- # IFS=: 00:07:54.472 01:42:39 -- accel/accel.sh@20 -- # read -r var val 00:07:54.472 01:42:39 -- accel/accel.sh@21 -- # val= 00:07:54.472 01:42:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.472 01:42:39 -- accel/accel.sh@20 -- # IFS=: 00:07:54.472 01:42:39 -- accel/accel.sh@20 -- # read -r var val 00:07:54.472 01:42:39 -- accel/accel.sh@21 -- # val= 00:07:54.472 01:42:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.472 01:42:39 -- accel/accel.sh@20 -- # IFS=: 00:07:54.472 01:42:39 -- accel/accel.sh@20 -- # read -r var val 00:07:54.472 01:42:39 -- accel/accel.sh@21 -- # val= 00:07:54.472 01:42:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.472 01:42:39 -- accel/accel.sh@20 -- # IFS=: 00:07:54.472 01:42:39 -- accel/accel.sh@20 -- # read -r var val 00:07:54.472 01:42:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:54.472 01:42:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:54.472 01:42:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:54.472 00:07:54.472 real 0m2.827s 00:07:54.472 user 0m2.528s 00:07:54.472 sys 0m0.292s 00:07:54.472 01:42:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.472 01:42:39 -- common/autotest_common.sh@10 -- # set +x 
00:07:54.472 ************************************ 00:07:54.472 END TEST accel_decomp_mthread 00:07:54.472 ************************************ 00:07:54.472 01:42:39 -- accel/accel.sh@114 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:54.472 01:42:39 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:54.472 01:42:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.472 01:42:39 -- common/autotest_common.sh@10 -- # set +x 00:07:54.472 ************************************ 00:07:54.472 START TEST accel_decomp_full_mthread 00:07:54.472 ************************************ 00:07:54.472 01:42:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:54.472 01:42:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:54.472 01:42:39 -- accel/accel.sh@17 -- # local accel_module 00:07:54.472 01:42:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:54.472 01:42:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:54.472 01:42:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.472 01:42:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:54.472 01:42:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.472 01:42:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.472 01:42:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:54.472 01:42:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:54.472 01:42:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:54.473 01:42:39 -- accel/accel.sh@42 -- # jq -r . 00:07:54.473 [2024-04-15 01:42:39.893945] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:54.473 [2024-04-15 01:42:39.894026] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2053962 ] 00:07:54.473 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.473 [2024-04-15 01:42:39.960714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.473 [2024-04-15 01:42:40.057075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.847 01:42:41 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:55.847 00:07:55.847 SPDK Configuration: 00:07:55.847 Core mask: 0x1 00:07:55.847 00:07:55.847 Accel Perf Configuration: 00:07:55.847 Workload Type: decompress 00:07:55.847 Transfer size: 111250 bytes 00:07:55.847 Vector count 1 00:07:55.847 Module: software 00:07:55.847 File Name: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:55.847 Queue depth: 32 00:07:55.847 Allocate depth: 32 00:07:55.847 # threads/core: 2 00:07:55.847 Run time: 1 seconds 00:07:55.847 Verify: Yes 00:07:55.847 00:07:55.847 Running for 1 seconds...
00:07:55.847 00:07:55.847 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:55.847 ------------------------------------------------------------------------------------ 00:07:55.847 0,1 1952/s 80 MiB/s 0 0 00:07:55.847 0,0 1920/s 79 MiB/s 0 0 00:07:55.847 ==================================================================================== 00:07:55.847 Total 3872/s 410 MiB/s 0 0' 00:07:55.847 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:55.847 01:42:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:55.847 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:55.847 01:42:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:55.847 01:42:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:55.847 01:42:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:55.847 01:42:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:55.847 01:42:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:55.847 01:42:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:55.847 01:42:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:55.847 01:42:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:55.847 01:42:41 -- accel/accel.sh@42 -- # jq -r . 00:07:55.847 [2024-04-15 01:42:41.351756] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:07:55.847 [2024-04-15 01:42:41.351837] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054100 ] 00:07:55.847 EAL: No free 2048 kB hugepages reported on node 1 00:07:55.847 [2024-04-15 01:42:41.412609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.106 [2024-04-15 01:42:41.506806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.106 01:42:41 -- accel/accel.sh@21 -- # val= 00:07:56.106 01:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:56.106 01:42:41 -- accel/accel.sh@21 -- # val= 00:07:56.106 01:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:56.106 01:42:41 -- accel/accel.sh@21 -- # val= 00:07:56.106 01:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:56.106 01:42:41 -- accel/accel.sh@21 -- # val=0x1 00:07:56.106 01:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:56.106 01:42:41 -- accel/accel.sh@21 -- # val= 00:07:56.106 01:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:56.106 01:42:41 -- accel/accel.sh@21 -- # val= 00:07:56.106 01:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:56.106 01:42:41 -- accel/accel.sh@21 -- # val=decompress 00:07:56.106 
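
Full-size buffers plus two threads on one core, checked against the table above: 1952 + 1920 = 3872 transfers/s, and 3872 * 111250 B is about 410 MiB/s, again matching the Total row and essentially the throughput of the single-threaded full-buffer run (404 MiB/s), as expected when both workers share a single polling core.
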
01:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.106 01:42:41 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:56.106 01:42:41 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:56.106 01:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:56.106 01:42:41 -- accel/accel.sh@21 -- # val= 00:07:56.106 01:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:56.106 01:42:41 -- accel/accel.sh@21 -- # val=software 00:07:56.106 01:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.106 01:42:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:56.106 01:42:41 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:56.106 01:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:56.106 01:42:41 -- accel/accel.sh@21 -- # val=32 00:07:56.106 01:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:56.106 01:42:41 -- accel/accel.sh@21 -- # val=32 00:07:56.106 01:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:56.106 01:42:41 -- accel/accel.sh@21 -- # val=2 00:07:56.106 01:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:56.106 01:42:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:56.106 01:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:56.106 01:42:41 -- accel/accel.sh@21 -- # val=Yes 00:07:56.106 01:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:56.106 01:42:41 -- accel/accel.sh@21 -- # val= 00:07:56.106 01:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:56.106 01:42:41 -- accel/accel.sh@21 -- # val= 00:07:56.106 01:42:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # IFS=: 00:07:56.106 01:42:41 -- accel/accel.sh@20 -- # read -r var val 00:07:57.480 01:42:42 -- accel/accel.sh@21 -- # val= 00:07:57.480 01:42:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.480 01:42:42 -- accel/accel.sh@20 -- # IFS=: 00:07:57.480 01:42:42 -- accel/accel.sh@20 -- # read -r var val 00:07:57.480 01:42:42 -- accel/accel.sh@21 -- # val= 00:07:57.480 01:42:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.480 01:42:42 -- accel/accel.sh@20 -- # IFS=: 00:07:57.480 01:42:42 -- accel/accel.sh@20 -- # read -r var val 00:07:57.480 01:42:42 -- accel/accel.sh@21 -- # val= 00:07:57.480 01:42:42 -- accel/accel.sh@22 -- # 
case "$var" in 00:07:57.480 01:42:42 -- accel/accel.sh@20 -- # IFS=: 00:07:57.480 01:42:42 -- accel/accel.sh@20 -- # read -r var val 00:07:57.480 01:42:42 -- accel/accel.sh@21 -- # val= 00:07:57.480 01:42:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.480 01:42:42 -- accel/accel.sh@20 -- # IFS=: 00:07:57.480 01:42:42 -- accel/accel.sh@20 -- # read -r var val 00:07:57.480 01:42:42 -- accel/accel.sh@21 -- # val= 00:07:57.480 01:42:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.480 01:42:42 -- accel/accel.sh@20 -- # IFS=: 00:07:57.480 01:42:42 -- accel/accel.sh@20 -- # read -r var val 00:07:57.480 01:42:42 -- accel/accel.sh@21 -- # val= 00:07:57.480 01:42:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.480 01:42:42 -- accel/accel.sh@20 -- # IFS=: 00:07:57.480 01:42:42 -- accel/accel.sh@20 -- # read -r var val 00:07:57.480 01:42:42 -- accel/accel.sh@21 -- # val= 00:07:57.480 01:42:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.480 01:42:42 -- accel/accel.sh@20 -- # IFS=: 00:07:57.480 01:42:42 -- accel/accel.sh@20 -- # read -r var val 00:07:57.480 01:42:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:57.480 01:42:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:57.480 01:42:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:57.480 00:07:57.480 real 0m2.903s 00:07:57.480 user 0m2.593s 00:07:57.480 sys 0m0.304s 00:07:57.480 01:42:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.480 01:42:42 -- common/autotest_common.sh@10 -- # set +x 00:07:57.480 ************************************ 00:07:57.480 END TEST accel_deomp_full_mthread 00:07:57.480 ************************************ 00:07:57.480 01:42:42 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:57.480 01:42:42 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:57.480 01:42:42 -- accel/accel.sh@129 -- # build_accel_config 00:07:57.480 01:42:42 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:57.480 01:42:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:57.480 01:42:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.480 01:42:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.480 01:42:42 -- common/autotest_common.sh@10 -- # set +x 00:07:57.480 01:42:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.480 01:42:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:57.480 01:42:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:57.481 01:42:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:57.481 01:42:42 -- accel/accel.sh@42 -- # jq -r . 00:07:57.481 ************************************ 00:07:57.481 START TEST accel_dif_functional_tests 00:07:57.481 ************************************ 00:07:57.481 01:42:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:57.481 [2024-04-15 01:42:42.844481] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:07:57.481 [2024-04-15 01:42:42.844565] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054326 ] 00:07:57.481 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.481 [2024-04-15 01:42:42.912213] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:57.481 [2024-04-15 01:42:43.008528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:57.481 [2024-04-15 01:42:43.008584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:57.481 [2024-04-15 01:42:43.008587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.481 00:07:57.481 00:07:57.481 CUnit - A unit testing framework for C - Version 2.1-3 00:07:57.481 http://cunit.sourceforge.net/ 00:07:57.481 00:07:57.481 00:07:57.481 Suite: accel_dif 00:07:57.481 Test: verify: DIF generated, GUARD check ...passed 00:07:57.481 Test: verify: DIF generated, APPTAG check ...passed 00:07:57.481 Test: verify: DIF generated, REFTAG check ...passed 00:07:57.481 Test: verify: DIF not generated, GUARD check ...[2024-04-15 01:42:43.103136] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:57.481 [2024-04-15 01:42:43.103201] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:57.481 passed 00:07:57.481 Test: verify: DIF not generated, APPTAG check ...[2024-04-15 01:42:43.103245] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:57.481 [2024-04-15 01:42:43.103275] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:57.481 passed 00:07:57.481 Test: verify: DIF not generated, REFTAG check ...[2024-04-15 01:42:43.103308] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:57.481 [2024-04-15 01:42:43.103336] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:57.481 passed 00:07:57.481 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:57.481 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-15 01:42:43.103403] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:57.481 passed 00:07:57.481 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:57.481 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:57.481 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:57.481 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-15 01:42:43.103556] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:57.481 passed 00:07:57.481 Test: generate copy: DIF generated, GUARD check ...passed 00:07:57.481 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:57.481 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:57.481 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:57.481 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:57.481 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:57.481 Test: generate copy: iovecs-len validate ...[2024-04-15 01:42:43.103806] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:57.481 passed 00:07:57.481 Test: generate copy: buffer alignment validate ...passed 00:07:57.481 00:07:57.481 Run Summary: Type Total Ran Passed Failed Inactive 00:07:57.481 suites 1 1 n/a 0 0 00:07:57.481 tests 20 20 20 0 0 00:07:57.481 asserts 204 204 204 0 n/a 00:07:57.481 00:07:57.481 Elapsed time = 0.003 seconds 00:07:57.741 00:07:57.741 real 0m0.521s 00:07:57.741 user 0m0.809s 00:07:57.741 sys 0m0.182s 00:07:57.741 01:42:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.741 01:42:43 -- common/autotest_common.sh@10 -- # set +x 00:07:57.741 ************************************ 00:07:57.741 END TEST accel_dif_functional_tests 00:07:57.741 ************************************ 00:07:57.741 00:07:57.741 real 0m59.769s 00:07:57.741 user 1m7.480s 00:07:57.741 sys 0m7.235s 00:07:57.741 01:42:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.741 01:42:43 -- common/autotest_common.sh@10 -- # set +x 00:07:57.741 ************************************ 00:07:57.741 END TEST accel 00:07:57.741 ************************************ 00:07:57.741 01:42:43 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:57.741 01:42:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:57.741 01:42:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.741 01:42:43 -- common/autotest_common.sh@10 -- # set +x 00:07:57.741 ************************************ 00:07:57.741 START TEST accel_rpc 00:07:57.741 ************************************ 00:07:57.741 01:42:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:58.000 * Looking for test storage... 00:07:58.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:58.000 01:42:43 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:58.000 01:42:43 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2054450 00:07:58.000 01:42:43 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:58.000 01:42:43 -- accel/accel_rpc.sh@15 -- # waitforlisten 2054450 00:07:58.000 01:42:43 -- common/autotest_common.sh@819 -- # '[' -z 2054450 ']' 00:07:58.000 01:42:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.000 01:42:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:58.000 01:42:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:58.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.000 01:42:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:58.000 01:42:43 -- common/autotest_common.sh@10 -- # set +x 00:07:58.000 [2024-04-15 01:42:43.477589] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
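
The accel_rpc target now starting was launched with --wait-for-rpc, which parks spdk_tgt before subsystem initialization so that opcode routing can be changed first; the trace below then issues accel_assign_opc and completes startup with framework_start_init. A condensed sketch of that ordering (the readiness polling that waitforlisten performs is elided to a comment):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$spdk/build/bin/spdk_tgt --wait-for-rpc &        # target idles before subsystem init
tgt_pid=$!
# ... poll until /var/tmp/spdk.sock accepts RPCs (waitforlisten's job) ...
$spdk/scripts/rpc.py accel_assign_opc -o copy -m software   # only legal pre-init
$spdk/scripts/rpc.py framework_start_init                   # now finish startup
$spdk/scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # -> software
kill $tgt_pid
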
00:07:58.000 [2024-04-15 01:42:43.477665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054450 ] 00:07:58.000 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.000 [2024-04-15 01:42:43.539710] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.000 [2024-04-15 01:42:43.631687] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:58.000 [2024-04-15 01:42:43.631877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.258 01:42:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:58.258 01:42:43 -- common/autotest_common.sh@852 -- # return 0 00:07:58.258 01:42:43 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:58.258 01:42:43 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:58.258 01:42:43 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:58.258 01:42:43 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:58.258 01:42:43 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:58.258 01:42:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:58.258 01:42:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:58.258 01:42:43 -- common/autotest_common.sh@10 -- # set +x 00:07:58.258 ************************************ 00:07:58.258 START TEST accel_assign_opcode 00:07:58.258 ************************************ 00:07:58.258 01:42:43 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:58.258 01:42:43 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:58.258 01:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:58.258 01:42:43 -- common/autotest_common.sh@10 -- # set +x 00:07:58.258 [2024-04-15 01:42:43.712520] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:58.258 01:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:58.258 01:42:43 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:58.259 01:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:58.259 01:42:43 -- common/autotest_common.sh@10 -- # set +x 00:07:58.259 [2024-04-15 01:42:43.720536] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:58.259 01:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:58.259 01:42:43 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:58.259 01:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:58.259 01:42:43 -- common/autotest_common.sh@10 -- # set +x 00:07:58.517 01:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:58.517 01:42:43 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:58.517 01:42:43 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:58.517 01:42:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:58.517 01:42:43 -- common/autotest_common.sh@10 -- # set +x 00:07:58.517 01:42:43 -- accel/accel_rpc.sh@42 -- # grep software 00:07:58.517 01:42:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:58.517 software 00:07:58.517 00:07:58.517 real 0m0.289s 00:07:58.517 user 0m0.034s 00:07:58.517 sys 0m0.007s 00:07:58.517 01:42:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.517 01:42:43 -- common/autotest_common.sh@10 -- # set +x 
00:07:58.517 ************************************ 00:07:58.517 END TEST accel_assign_opcode 00:07:58.517 ************************************ 00:07:58.517 01:42:44 -- accel/accel_rpc.sh@55 -- # killprocess 2054450 00:07:58.517 01:42:44 -- common/autotest_common.sh@926 -- # '[' -z 2054450 ']' 00:07:58.517 01:42:44 -- common/autotest_common.sh@930 -- # kill -0 2054450 00:07:58.517 01:42:44 -- common/autotest_common.sh@931 -- # uname 00:07:58.517 01:42:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:58.517 01:42:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2054450 00:07:58.517 01:42:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:58.517 01:42:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:58.517 01:42:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2054450' 00:07:58.517 killing process with pid 2054450 00:07:58.517 01:42:44 -- common/autotest_common.sh@945 -- # kill 2054450 00:07:58.517 01:42:44 -- common/autotest_common.sh@950 -- # wait 2054450 00:07:59.084 00:07:59.084 real 0m1.075s 00:07:59.084 user 0m1.015s 00:07:59.084 sys 0m0.408s 00:07:59.084 01:42:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.084 01:42:44 -- common/autotest_common.sh@10 -- # set +x 00:07:59.084 ************************************ 00:07:59.084 END TEST accel_rpc 00:07:59.084 ************************************ 00:07:59.084 01:42:44 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:59.084 01:42:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:59.084 01:42:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.084 01:42:44 -- common/autotest_common.sh@10 -- # set +x 00:07:59.084 ************************************ 00:07:59.084 START TEST app_cmdline 00:07:59.084 ************************************ 00:07:59.084 01:42:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:59.084 * Looking for test storage... 00:07:59.084 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:59.084 01:42:44 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:59.084 01:42:44 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2054653 00:07:59.084 01:42:44 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:59.084 01:42:44 -- app/cmdline.sh@18 -- # waitforlisten 2054653 00:07:59.084 01:42:44 -- common/autotest_common.sh@819 -- # '[' -z 2054653 ']' 00:07:59.084 01:42:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.084 01:42:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:59.084 01:42:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.084 01:42:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:59.084 01:42:44 -- common/autotest_common.sh@10 -- # set +x 00:07:59.084 [2024-04-15 01:42:44.577357] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
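
The spdk_tgt instance coming up here runs with --rpcs-allowed spdk_get_version,rpc_get_methods, so the RPC layer serves exactly those two methods and rejects everything else; the env_dpdk_get_mem_stats call further down is the negative probe, answered with JSON-RPC error -32601 ("Method not found"). A minimal sketch of the allowlist behaviour (readiness wait elided):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
# ... wait for /var/tmp/spdk.sock ...
$spdk/scripts/rpc.py spdk_get_version | jq -r .version   # allowed: SPDK v24.01.1-pre ...
$spdk/scripts/rpc.py env_dpdk_get_mem_stats              # filtered: JSON-RPC error -32601
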
00:07:59.084 [2024-04-15 01:42:44.577434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2054653 ] 00:07:59.084 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.084 [2024-04-15 01:42:44.634054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.084 [2024-04-15 01:42:44.716639] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:59.084 [2024-04-15 01:42:44.716792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.019 01:42:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:00.019 01:42:45 -- common/autotest_common.sh@852 -- # return 0 00:08:00.019 01:42:45 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:00.277 { 00:08:00.277 "version": "SPDK v24.01.1-pre git sha1 3b33f4333", 00:08:00.277 "fields": { 00:08:00.277 "major": 24, 00:08:00.277 "minor": 1, 00:08:00.277 "patch": 1, 00:08:00.277 "suffix": "-pre", 00:08:00.277 "commit": "3b33f4333" 00:08:00.277 } 00:08:00.277 } 00:08:00.277 01:42:45 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:00.277 01:42:45 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:00.277 01:42:45 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:00.277 01:42:45 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:00.277 01:42:45 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:00.277 01:42:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:00.277 01:42:45 -- common/autotest_common.sh@10 -- # set +x 00:08:00.277 01:42:45 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:00.277 01:42:45 -- app/cmdline.sh@26 -- # sort 00:08:00.277 01:42:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:00.277 01:42:45 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:00.277 01:42:45 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:00.277 01:42:45 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.277 01:42:45 -- common/autotest_common.sh@640 -- # local es=0 00:08:00.277 01:42:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.277 01:42:45 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.277 01:42:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:00.277 01:42:45 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.277 01:42:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:00.277 01:42:45 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.277 01:42:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:00.277 01:42:45 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:00.277 01:42:45 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:00.277 01:42:45 -- 
common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:00.536 request: 00:08:00.536 { 00:08:00.536 "method": "env_dpdk_get_mem_stats", 00:08:00.536 "req_id": 1 00:08:00.536 } 00:08:00.536 Got JSON-RPC error response 00:08:00.536 response: 00:08:00.536 { 00:08:00.536 "code": -32601, 00:08:00.536 "message": "Method not found" 00:08:00.536 } 00:08:00.536 01:42:46 -- common/autotest_common.sh@643 -- # es=1 00:08:00.536 01:42:46 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:00.536 01:42:46 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:00.536 01:42:46 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:00.536 01:42:46 -- app/cmdline.sh@1 -- # killprocess 2054653 00:08:00.536 01:42:46 -- common/autotest_common.sh@926 -- # '[' -z 2054653 ']' 00:08:00.536 01:42:46 -- common/autotest_common.sh@930 -- # kill -0 2054653 00:08:00.536 01:42:46 -- common/autotest_common.sh@931 -- # uname 00:08:00.536 01:42:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:00.536 01:42:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2054653 00:08:00.536 01:42:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:00.536 01:42:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:00.536 01:42:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2054653' 00:08:00.536 killing process with pid 2054653 00:08:00.536 01:42:46 -- common/autotest_common.sh@945 -- # kill 2054653 00:08:00.536 01:42:46 -- common/autotest_common.sh@950 -- # wait 2054653 00:08:01.102 00:08:01.102 real 0m2.059s 00:08:01.102 user 0m2.615s 00:08:01.102 sys 0m0.520s 00:08:01.102 01:42:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.102 01:42:46 -- common/autotest_common.sh@10 -- # set +x 00:08:01.102 ************************************ 00:08:01.102 END TEST app_cmdline 00:08:01.102 ************************************ 00:08:01.102 01:42:46 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:01.102 01:42:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:01.102 01:42:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:01.102 01:42:46 -- common/autotest_common.sh@10 -- # set +x 00:08:01.102 ************************************ 00:08:01.102 START TEST version 00:08:01.102 ************************************ 00:08:01.102 01:42:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:08:01.102 * Looking for test storage... 
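
version.sh, traced below, never asks a running target for its version: it scrapes the SPDK_VERSION_* defines out of include/spdk/version.h with grep/cut/tr and cross-checks the result against the bundled Python package, mapping the -pre suffix to rc0 before comparing. A compact sketch of the same extraction (cut -f2 assumes the header's #define lines are tab-separated, which this run's successful parse implies; the script itself takes lowercase field names):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
get_header_version() {   # MAJOR | MINOR | PATCH | SUFFIX -> value
  grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$spdk/include/spdk/version.h" | cut -f2 | tr -d '"'
}
echo "$(get_header_version MAJOR).$(get_header_version MINOR).$(get_header_version PATCH)"   # 24.1.1 in this run
PYTHONPATH=$spdk/python python3 -c 'import spdk; print(spdk.__version__)'                    # 24.1.1rc0
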
00:08:01.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:01.102 01:42:46 -- app/version.sh@17 -- # get_header_version major 00:08:01.102 01:42:46 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:01.102 01:42:46 -- app/version.sh@14 -- # cut -f2 00:08:01.102 01:42:46 -- app/version.sh@14 -- # tr -d '"' 00:08:01.102 01:42:46 -- app/version.sh@17 -- # major=24 00:08:01.102 01:42:46 -- app/version.sh@18 -- # get_header_version minor 00:08:01.102 01:42:46 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:01.102 01:42:46 -- app/version.sh@14 -- # cut -f2 00:08:01.102 01:42:46 -- app/version.sh@14 -- # tr -d '"' 00:08:01.102 01:42:46 -- app/version.sh@18 -- # minor=1 00:08:01.102 01:42:46 -- app/version.sh@19 -- # get_header_version patch 00:08:01.102 01:42:46 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:01.102 01:42:46 -- app/version.sh@14 -- # cut -f2 00:08:01.102 01:42:46 -- app/version.sh@14 -- # tr -d '"' 00:08:01.102 01:42:46 -- app/version.sh@19 -- # patch=1 00:08:01.102 01:42:46 -- app/version.sh@20 -- # get_header_version suffix 00:08:01.102 01:42:46 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:08:01.102 01:42:46 -- app/version.sh@14 -- # cut -f2 00:08:01.102 01:42:46 -- app/version.sh@14 -- # tr -d '"' 00:08:01.102 01:42:46 -- app/version.sh@20 -- # suffix=-pre 00:08:01.102 01:42:46 -- app/version.sh@22 -- # version=24.1 00:08:01.103 01:42:46 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:01.103 01:42:46 -- app/version.sh@25 -- # version=24.1.1 00:08:01.103 01:42:46 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:01.103 01:42:46 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:01.103 01:42:46 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:01.103 01:42:46 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:01.103 01:42:46 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:01.103 00:08:01.103 real 0m0.102s 00:08:01.103 user 0m0.054s 00:08:01.103 sys 0m0.069s 00:08:01.103 01:42:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.103 01:42:46 -- common/autotest_common.sh@10 -- # set +x 00:08:01.103 ************************************ 00:08:01.103 END TEST version 00:08:01.103 ************************************ 00:08:01.103 01:42:46 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:08:01.103 01:42:46 -- spdk/autotest.sh@204 -- # uname -s 00:08:01.103 01:42:46 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:08:01.103 01:42:46 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:01.103 01:42:46 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:08:01.103 01:42:46 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:08:01.103 01:42:46 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:08:01.103 01:42:46 -- spdk/autotest.sh@268 -- # timing_exit lib 00:08:01.103 01:42:46 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:08:01.103 01:42:46 -- common/autotest_common.sh@10 -- # set +x 00:08:01.103 01:42:46 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:08:01.103 01:42:46 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:08:01.103 01:42:46 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:08:01.103 01:42:46 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:08:01.103 01:42:46 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:08:01.103 01:42:46 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:08:01.103 01:42:46 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:01.103 01:42:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:01.103 01:42:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:01.103 01:42:46 -- common/autotest_common.sh@10 -- # set +x 00:08:01.103 ************************************ 00:08:01.103 START TEST nvmf_tcp 00:08:01.103 ************************************ 00:08:01.103 01:42:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:01.362 * Looking for test storage... 00:08:01.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:08:01.362 01:42:46 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:01.362 01:42:46 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:01.362 01:42:46 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.362 01:42:46 -- nvmf/common.sh@7 -- # uname -s 00:08:01.362 01:42:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.362 01:42:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.362 01:42:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.362 01:42:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.362 01:42:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.362 01:42:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.362 01:42:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.362 01:42:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.362 01:42:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.362 01:42:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.362 01:42:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:01.362 01:42:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:01.362 01:42:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.362 01:42:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.362 01:42:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.362 01:42:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.362 01:42:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.362 01:42:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.362 01:42:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.362 01:42:46 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.362 01:42:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.362 01:42:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.362 01:42:46 -- paths/export.sh@5 -- # export PATH 00:08:01.362 01:42:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.362 01:42:46 -- nvmf/common.sh@46 -- # : 0 00:08:01.362 01:42:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:01.362 01:42:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:01.362 01:42:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:01.362 01:42:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.362 01:42:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.362 01:42:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:01.362 01:42:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:01.362 01:42:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:01.362 01:42:46 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:01.362 01:42:46 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:01.362 01:42:46 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:01.362 01:42:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:01.362 01:42:46 -- common/autotest_common.sh@10 -- # set +x 00:08:01.362 01:42:46 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:01.362 01:42:46 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:01.362 01:42:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:01.362 01:42:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:01.362 01:42:46 -- common/autotest_common.sh@10 -- # set +x 00:08:01.362 ************************************ 00:08:01.362 START TEST nvmf_example 00:08:01.362 ************************************ 00:08:01.362 01:42:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:08:01.362 * Looking for test storage... 
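
nvmf_example.sh re-sources nvmf/common.sh, so the fabric constants traced above (ports 4420-4422, serial SPDKISFASTANDAWESOME) scroll past again below. The host identity is the one derived value: it comes from nvme gen-hostnqn at source time. A sketch of that derivation (extracting the HOSTID with a parameter expansion matches the values seen in this run but is an assumption about the script's exact wording):

NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}     # bare uuid, 5b23e107-7094-e311-b1cb-001e67a97d55 here
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")   # reused by every nvme connect
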
00:08:01.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:01.362 01:42:46 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:01.362 01:42:46 -- nvmf/common.sh@7 -- # uname -s 00:08:01.362 01:42:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:01.362 01:42:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:01.362 01:42:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:01.362 01:42:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:01.362 01:42:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:01.362 01:42:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:01.362 01:42:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:01.362 01:42:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:01.362 01:42:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:01.362 01:42:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:01.362 01:42:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:01.362 01:42:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:01.362 01:42:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:01.362 01:42:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:01.362 01:42:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:01.362 01:42:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:01.362 01:42:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:01.362 01:42:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:01.362 01:42:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:01.363 01:42:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.363 01:42:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.363 01:42:46 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.363 01:42:46 -- paths/export.sh@5 -- # export PATH 00:08:01.363 01:42:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:01.363 01:42:46 -- nvmf/common.sh@46 -- # : 0 00:08:01.363 01:42:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:01.363 01:42:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:01.363 01:42:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:01.363 01:42:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:01.363 01:42:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:01.363 01:42:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:01.363 01:42:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:01.363 01:42:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:01.363 01:42:46 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:01.363 01:42:46 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:01.363 01:42:46 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:01.363 01:42:46 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:01.363 01:42:46 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:01.363 01:42:46 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:01.363 01:42:46 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:01.363 01:42:46 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:01.363 01:42:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:01.363 01:42:46 -- common/autotest_common.sh@10 -- # set +x 00:08:01.363 01:42:46 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:01.363 01:42:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:01.363 01:42:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:01.363 01:42:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:01.363 01:42:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:01.363 01:42:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:01.363 01:42:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.363 01:42:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:01.363 01:42:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:01.363 01:42:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:01.363 01:42:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:01.363 01:42:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:01.363 01:42:46 -- 
common/autotest_common.sh@10 -- # set +x 00:08:03.292 01:42:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:03.292 01:42:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:03.292 01:42:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:03.292 01:42:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:03.292 01:42:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:03.292 01:42:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:03.292 01:42:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:03.292 01:42:48 -- nvmf/common.sh@294 -- # net_devs=() 00:08:03.292 01:42:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:03.292 01:42:48 -- nvmf/common.sh@295 -- # e810=() 00:08:03.292 01:42:48 -- nvmf/common.sh@295 -- # local -ga e810 00:08:03.292 01:42:48 -- nvmf/common.sh@296 -- # x722=() 00:08:03.292 01:42:48 -- nvmf/common.sh@296 -- # local -ga x722 00:08:03.292 01:42:48 -- nvmf/common.sh@297 -- # mlx=() 00:08:03.292 01:42:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:03.292 01:42:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.292 01:42:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.292 01:42:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.292 01:42:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.292 01:42:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.292 01:42:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.292 01:42:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.292 01:42:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.292 01:42:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.292 01:42:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.292 01:42:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.292 01:42:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:03.292 01:42:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:03.292 01:42:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:03.292 01:42:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:03.292 01:42:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:03.292 01:42:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:03.292 01:42:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:03.292 01:42:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:03.292 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:03.292 01:42:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:03.292 01:42:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:03.292 01:42:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.292 01:42:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.292 01:42:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:03.292 01:42:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:03.292 01:42:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:03.292 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:03.292 01:42:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:03.292 01:42:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:03.292 01:42:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.292 01:42:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
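
The scan above walks the PCI bus cache and keeps the functions whose vendor:device IDs sit in the e810/x722/mlx allowlists (0x8086:0x159b matched twice here); the lines that follow then resolve each kept function to its kernel netdev through sysfs. That last hop, condensed (PCI address taken from this run):

pci=0000:0a:00.0
pci_net_devs=(/sys/bus/pci/devices/$pci/net/*)    # e.g. .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")           # strip paths, keep interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0
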
00:08:03.292 01:42:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:03.292 01:42:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:03.292 01:42:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:03.292 01:42:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:03.292 01:42:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:03.292 01:42:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.292 01:42:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:03.292 01:42:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.292 01:42:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:03.292 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:03.292 01:42:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.292 01:42:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:03.292 01:42:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.292 01:42:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:03.292 01:42:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.292 01:42:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:03.292 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:03.292 01:42:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.292 01:42:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:03.292 01:42:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:03.292 01:42:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:03.292 01:42:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:03.292 01:42:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:03.292 01:42:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.292 01:42:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.292 01:42:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.292 01:42:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:03.292 01:42:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.292 01:42:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.292 01:42:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:03.292 01:42:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.292 01:42:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.292 01:42:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:03.292 01:42:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:03.292 01:42:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.292 01:42:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.292 01:42:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.292 01:42:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.292 01:42:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:03.292 01:42:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.292 01:42:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.292 01:42:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.292 01:42:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:03.292 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:03.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.278 ms 00:08:03.292 00:08:03.292 --- 10.0.0.2 ping statistics --- 00:08:03.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.292 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:08:03.292 01:42:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:03.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:08:03.292 00:08:03.292 --- 10.0.0.1 ping statistics --- 00:08:03.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.292 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:08:03.292 01:42:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.292 01:42:48 -- nvmf/common.sh@410 -- # return 0 00:08:03.292 01:42:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:03.292 01:42:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.292 01:42:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:03.292 01:42:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:03.292 01:42:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.293 01:42:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:03.293 01:42:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:03.552 01:42:48 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:03.552 01:42:48 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:03.552 01:42:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:03.552 01:42:48 -- common/autotest_common.sh@10 -- # set +x 00:08:03.552 01:42:48 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:08:03.552 01:42:48 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:08:03.552 01:42:48 -- target/nvmf_example.sh@34 -- # nvmfpid=2056695 00:08:03.552 01:42:48 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:03.552 01:42:48 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:03.552 01:42:48 -- target/nvmf_example.sh@36 -- # waitforlisten 2056695 00:08:03.552 01:42:48 -- common/autotest_common.sh@819 -- # '[' -z 2056695 ']' 00:08:03.552 01:42:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.552 01:42:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:03.552 01:42:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
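
With the example target listening inside the namespace, the RPCs traced below assemble the whole data path: a TCP transport, a 64 MiB malloc bdev, subsystem cnode1, its namespace, and a listener on the namespaced 10.0.0.2:4420; spdk_nvme_perf then drives it from the root namespace through the iptables rule opened earlier. The same sequence as plain commands (socket readiness wait elided; flags exactly as issued in the trace):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc=$spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                  # 64 MiB, 512 B blocks -> Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'   # -M 30: 30% reads
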
00:08:03.552 01:42:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:03.552 01:42:48 -- common/autotest_common.sh@10 -- # set +x 00:08:03.552 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.486 01:42:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:04.486 01:42:49 -- common/autotest_common.sh@852 -- # return 0 00:08:04.486 01:42:49 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:04.486 01:42:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:04.486 01:42:49 -- common/autotest_common.sh@10 -- # set +x 00:08:04.486 01:42:49 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:04.486 01:42:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.486 01:42:49 -- common/autotest_common.sh@10 -- # set +x 00:08:04.486 01:42:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.486 01:42:50 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:04.486 01:42:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.486 01:42:50 -- common/autotest_common.sh@10 -- # set +x 00:08:04.486 01:42:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.486 01:42:50 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:04.486 01:42:50 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:04.486 01:42:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.486 01:42:50 -- common/autotest_common.sh@10 -- # set +x 00:08:04.486 01:42:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.486 01:42:50 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:04.486 01:42:50 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:04.486 01:42:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.486 01:42:50 -- common/autotest_common.sh@10 -- # set +x 00:08:04.486 01:42:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.486 01:42:50 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:04.486 01:42:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:04.486 01:42:50 -- common/autotest_common.sh@10 -- # set +x 00:08:04.486 01:42:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:04.486 01:42:50 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:04.486 01:42:50 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:04.486 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.683 Initializing NVMe Controllers 00:08:16.683 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:16.683 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:16.683 Initialization complete. Launching workers. 
00:08:16.683 ======================================================== 00:08:16.683 Latency(us) 00:08:16.683 Device Information : IOPS MiB/s Average min max 00:08:16.683 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12318.57 48.12 5194.90 1062.56 16821.55 00:08:16.683 ======================================================== 00:08:16.683 Total : 12318.57 48.12 5194.90 1062.56 16821.55 00:08:16.683 00:08:16.683 01:43:00 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:16.683 01:43:00 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:16.683 01:43:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:16.683 01:43:00 -- nvmf/common.sh@116 -- # sync 00:08:16.683 01:43:00 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:16.683 01:43:00 -- nvmf/common.sh@119 -- # set +e 00:08:16.683 01:43:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:16.683 01:43:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:16.683 rmmod nvme_tcp 00:08:16.683 rmmod nvme_fabrics 00:08:16.683 rmmod nvme_keyring 00:08:16.683 01:43:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:16.683 01:43:00 -- nvmf/common.sh@123 -- # set -e 00:08:16.683 01:43:00 -- nvmf/common.sh@124 -- # return 0 00:08:16.683 01:43:00 -- nvmf/common.sh@477 -- # '[' -n 2056695 ']' 00:08:16.683 01:43:00 -- nvmf/common.sh@478 -- # killprocess 2056695 00:08:16.683 01:43:00 -- common/autotest_common.sh@926 -- # '[' -z 2056695 ']' 00:08:16.683 01:43:00 -- common/autotest_common.sh@930 -- # kill -0 2056695 00:08:16.683 01:43:00 -- common/autotest_common.sh@931 -- # uname 00:08:16.683 01:43:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:16.683 01:43:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2056695 00:08:16.683 01:43:00 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:08:16.683 01:43:00 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:08:16.683 01:43:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2056695' 00:08:16.683 killing process with pid 2056695 00:08:16.683 01:43:00 -- common/autotest_common.sh@945 -- # kill 2056695 00:08:16.683 01:43:00 -- common/autotest_common.sh@950 -- # wait 2056695 00:08:16.683 nvmf threads initialize successfully 00:08:16.683 bdev subsystem init successfully 00:08:16.683 created a nvmf target service 00:08:16.683 create target's poll groups done 00:08:16.683 all subsystems of target started 00:08:16.683 nvmf target is running 00:08:16.683 all subsystems of target stopped 00:08:16.683 destroy target's poll groups done 00:08:16.683 destroyed the nvmf target service 00:08:16.683 bdev subsystem finish successfully 00:08:16.683 nvmf threads destroy successfully 00:08:16.683 01:43:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:16.684 01:43:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:16.684 01:43:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:16.684 01:43:00 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:16.684 01:43:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:16.684 01:43:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.684 01:43:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.684 01:43:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.942 01:43:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:16.942 01:43:02 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:16.942 01:43:02 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:08:16.942 01:43:02 -- common/autotest_common.sh@10 -- # set +x 00:08:17.203 00:08:17.203 real 0m15.822s 00:08:17.203 user 0m45.124s 00:08:17.203 sys 0m3.153s 00:08:17.203 01:43:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:17.203 01:43:02 -- common/autotest_common.sh@10 -- # set +x 00:08:17.203 ************************************ 00:08:17.203 END TEST nvmf_example 00:08:17.203 ************************************ 00:08:17.203 01:43:02 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:17.203 01:43:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:17.203 01:43:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:17.203 01:43:02 -- common/autotest_common.sh@10 -- # set +x 00:08:17.203 ************************************ 00:08:17.203 START TEST nvmf_filesystem 00:08:17.203 ************************************ 00:08:17.203 01:43:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:17.203 * Looking for test storage... 00:08:17.203 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.203 01:43:02 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:08:17.203 01:43:02 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:17.203 01:43:02 -- common/autotest_common.sh@34 -- # set -e 00:08:17.203 01:43:02 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:17.203 01:43:02 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:17.203 01:43:02 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:17.203 01:43:02 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:08:17.203 01:43:02 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:17.203 01:43:02 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:17.203 01:43:02 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:17.203 01:43:02 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:17.203 01:43:02 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:17.203 01:43:02 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:17.203 01:43:02 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:17.203 01:43:02 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:17.203 01:43:02 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:17.203 01:43:02 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:17.203 01:43:02 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:17.203 01:43:02 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:17.203 01:43:02 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:17.203 01:43:02 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:17.203 01:43:02 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:17.203 01:43:02 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:17.203 01:43:02 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:17.203 01:43:02 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:17.203 01:43:02 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:17.203 01:43:02 -- 
common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:17.203 01:43:02 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:17.203 01:43:02 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:17.203 01:43:02 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:17.203 01:43:02 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:17.203 01:43:02 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:17.203 01:43:02 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:17.203 01:43:02 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:17.203 01:43:02 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:17.203 01:43:02 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:17.203 01:43:02 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:17.203 01:43:02 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:17.203 01:43:02 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:17.203 01:43:02 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:17.203 01:43:02 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:17.203 01:43:02 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:17.203 01:43:02 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:17.203 01:43:02 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:17.203 01:43:02 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:17.203 01:43:02 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:17.203 01:43:02 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:17.203 01:43:02 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:17.203 01:43:02 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:17.203 01:43:02 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:17.203 01:43:02 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:17.203 01:43:02 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:17.203 01:43:02 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:17.203 01:43:02 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:17.203 01:43:02 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:17.203 01:43:02 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:17.203 01:43:02 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:17.203 01:43:02 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=y 00:08:17.203 01:43:02 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:17.203 01:43:02 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:17.203 01:43:02 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:17.203 01:43:02 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:17.203 01:43:02 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:17.203 01:43:02 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:17.203 01:43:02 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:17.203 01:43:02 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:17.203 01:43:02 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=n 00:08:17.203 01:43:02 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:17.203 01:43:02 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:17.203 01:43:02 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:17.203 01:43:02 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:17.203 01:43:02 -- 
common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:17.203 01:43:02 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:17.203 01:43:02 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:17.203 01:43:02 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:17.203 01:43:02 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:17.203 01:43:02 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:17.203 01:43:02 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:17.203 01:43:02 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:17.203 01:43:02 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:17.203 01:43:02 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:17.204 01:43:02 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:17.204 01:43:02 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:17.204 01:43:02 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:17.204 01:43:02 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:17.204 01:43:02 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:17.204 01:43:02 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:17.204 01:43:02 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:08:17.204 01:43:02 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:17.204 01:43:02 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:08:17.204 01:43:02 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:17.204 01:43:02 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:17.204 01:43:02 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:08:17.204 01:43:02 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:17.204 01:43:02 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:17.204 01:43:02 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:17.204 01:43:02 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:17.204 01:43:02 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:17.204 01:43:02 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:17.204 01:43:02 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:17.204 01:43:02 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:08:17.204 01:43:02 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:17.204 #define SPDK_CONFIG_H 00:08:17.204 #define SPDK_CONFIG_APPS 1 00:08:17.204 #define SPDK_CONFIG_ARCH native 00:08:17.204 #undef SPDK_CONFIG_ASAN 00:08:17.204 #undef SPDK_CONFIG_AVAHI 00:08:17.204 #undef SPDK_CONFIG_CET 00:08:17.204 #define SPDK_CONFIG_COVERAGE 1 00:08:17.204 #define SPDK_CONFIG_CROSS_PREFIX 00:08:17.204 #undef SPDK_CONFIG_CRYPTO 00:08:17.204 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:17.204 #undef SPDK_CONFIG_CUSTOMOCF 00:08:17.204 #undef SPDK_CONFIG_DAOS 00:08:17.204 #define SPDK_CONFIG_DAOS_DIR 00:08:17.204 #define SPDK_CONFIG_DEBUG 1 00:08:17.204 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:17.204 #define SPDK_CONFIG_DPDK_DIR 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:17.204 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:08:17.204 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:17.204 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:17.204 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:08:17.204 #define SPDK_CONFIG_EXAMPLES 1 00:08:17.204 #undef SPDK_CONFIG_FC 00:08:17.204 #define SPDK_CONFIG_FC_PATH 00:08:17.204 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:17.204 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:17.204 #undef SPDK_CONFIG_FUSE 00:08:17.204 #undef SPDK_CONFIG_FUZZER 00:08:17.204 #define SPDK_CONFIG_FUZZER_LIB 00:08:17.204 #undef SPDK_CONFIG_GOLANG 00:08:17.204 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:17.204 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:17.204 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:17.204 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:17.204 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:17.204 #define SPDK_CONFIG_IDXD 1 00:08:17.204 #undef SPDK_CONFIG_IDXD_KERNEL 00:08:17.204 #undef SPDK_CONFIG_IPSEC_MB 00:08:17.204 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:17.204 #define SPDK_CONFIG_ISAL 1 00:08:17.204 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:17.204 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:17.204 #define SPDK_CONFIG_LIBDIR 00:08:17.204 #undef SPDK_CONFIG_LTO 00:08:17.204 #define SPDK_CONFIG_MAX_LCORES 00:08:17.204 #define SPDK_CONFIG_NVME_CUSE 1 00:08:17.204 #undef SPDK_CONFIG_OCF 00:08:17.204 #define SPDK_CONFIG_OCF_PATH 00:08:17.204 #define SPDK_CONFIG_OPENSSL_PATH 00:08:17.204 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:17.204 #undef SPDK_CONFIG_PGO_USE 00:08:17.204 #define SPDK_CONFIG_PREFIX /usr/local 00:08:17.204 #undef SPDK_CONFIG_RAID5F 00:08:17.204 #undef SPDK_CONFIG_RBD 00:08:17.204 #define SPDK_CONFIG_RDMA 1 00:08:17.204 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:17.204 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:17.204 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:17.204 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:17.204 #define SPDK_CONFIG_SHARED 1 00:08:17.204 #undef SPDK_CONFIG_SMA 00:08:17.204 #define SPDK_CONFIG_TESTS 1 00:08:17.204 #undef SPDK_CONFIG_TSAN 00:08:17.204 #define SPDK_CONFIG_UBLK 1 00:08:17.204 #define SPDK_CONFIG_UBSAN 1 00:08:17.204 #undef SPDK_CONFIG_UNIT_TESTS 00:08:17.204 #undef SPDK_CONFIG_URING 00:08:17.204 #define SPDK_CONFIG_URING_PATH 00:08:17.204 #undef SPDK_CONFIG_URING_ZNS 00:08:17.204 #undef SPDK_CONFIG_USDT 00:08:17.204 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:17.204 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:17.204 #define SPDK_CONFIG_VFIO_USER 1 00:08:17.204 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:17.204 #define SPDK_CONFIG_VHOST 1 00:08:17.204 #define SPDK_CONFIG_VIRTIO 1 00:08:17.204 #undef SPDK_CONFIG_VTUNE 00:08:17.204 #define SPDK_CONFIG_VTUNE_DIR 00:08:17.204 #define SPDK_CONFIG_WERROR 1 00:08:17.204 #define SPDK_CONFIG_WPDK_DIR 00:08:17.204 #undef SPDK_CONFIG_XNVME 00:08:17.204 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:17.204 01:43:02 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:17.204 01:43:02 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.204 01:43:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.204 01:43:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.204 
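The heavily backslash-escaped pattern in the trace above is bash echoing a glob match of the full text of include/spdk/config.h against *#define SPDK_CONFIG_DEBUG*; that is how applications.sh decides whether this is a debug build before enabling debug-only test apps. A minimal sketch of the same probe, with the workspace path hard-coded here purely for illustration:

    # Read config.h in one shot and glob-match it for the debug define.
    config_h=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h
    if [[ -e $config_h && $(<"$config_h") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
        echo "SPDK was configured with --enable-debug"
    fi
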
01:43:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.204 01:43:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.204 01:43:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.204 01:43:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.204 01:43:02 -- paths/export.sh@5 -- # export PATH 00:08:17.204 01:43:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.204 01:43:02 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:17.204 01:43:02 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:08:17.204 01:43:02 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:17.204 01:43:02 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:08:17.204 01:43:02 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:17.204 01:43:02 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:17.204 01:43:02 -- pm/common@16 -- # TEST_TAG=N/A 00:08:17.204 01:43:02 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:08:17.204 01:43:02 -- common/autotest_common.sh@52 -- # : 1 00:08:17.204 01:43:02 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:17.204 01:43:02 -- common/autotest_common.sh@56 -- # : 0 
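The long run of ": N" / "export NAME" pairs around this point is autotest_common.sh assigning defaults to its test knobs; xtrace only prints the already-expanded form. The underlying idiom is the usual parameter-expansion default, roughly as sketched below (flag names and values taken from this trace; the point of the pattern is that Jenkins-provided values survive):

    # ":" is a no-op command, but ${VAR:=default} inside it assigns the
    # default only when VAR is unset or empty, so the CI job can pre-seed it.
    : "${RUN_NIGHTLY:=1}";                 export RUN_NIGHTLY
    : "${SPDK_TEST_NVMF:=1}";              export SPDK_TEST_NVMF
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}";  export SPDK_TEST_NVMF_TRANSPORT
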
00:08:17.204 01:43:02 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:17.204 01:43:02 -- common/autotest_common.sh@58 -- # : 0 00:08:17.204 01:43:02 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:17.204 01:43:02 -- common/autotest_common.sh@60 -- # : 1 00:08:17.204 01:43:02 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:17.204 01:43:02 -- common/autotest_common.sh@62 -- # : 0 00:08:17.204 01:43:02 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:17.204 01:43:02 -- common/autotest_common.sh@64 -- # : 00:08:17.204 01:43:02 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:17.204 01:43:02 -- common/autotest_common.sh@66 -- # : 0 00:08:17.204 01:43:02 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:17.204 01:43:02 -- common/autotest_common.sh@68 -- # : 0 00:08:17.204 01:43:02 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:17.204 01:43:02 -- common/autotest_common.sh@70 -- # : 0 00:08:17.204 01:43:02 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:17.204 01:43:02 -- common/autotest_common.sh@72 -- # : 0 00:08:17.204 01:43:02 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:17.204 01:43:02 -- common/autotest_common.sh@74 -- # : 0 00:08:17.204 01:43:02 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:17.204 01:43:02 -- common/autotest_common.sh@76 -- # : 0 00:08:17.204 01:43:02 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:17.204 01:43:02 -- common/autotest_common.sh@78 -- # : 0 00:08:17.204 01:43:02 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:17.204 01:43:02 -- common/autotest_common.sh@80 -- # : 1 00:08:17.204 01:43:02 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:17.205 01:43:02 -- common/autotest_common.sh@82 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:17.205 01:43:02 -- common/autotest_common.sh@84 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:17.205 01:43:02 -- common/autotest_common.sh@86 -- # : 1 00:08:17.205 01:43:02 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:17.205 01:43:02 -- common/autotest_common.sh@88 -- # : 1 00:08:17.205 01:43:02 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:17.205 01:43:02 -- common/autotest_common.sh@90 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:17.205 01:43:02 -- common/autotest_common.sh@92 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:17.205 01:43:02 -- common/autotest_common.sh@94 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:17.205 01:43:02 -- common/autotest_common.sh@96 -- # : tcp 00:08:17.205 01:43:02 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:17.205 01:43:02 -- common/autotest_common.sh@98 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:17.205 01:43:02 -- common/autotest_common.sh@100 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:17.205 01:43:02 -- common/autotest_common.sh@102 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:17.205 01:43:02 -- 
common/autotest_common.sh@104 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:17.205 01:43:02 -- common/autotest_common.sh@106 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:17.205 01:43:02 -- common/autotest_common.sh@108 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:17.205 01:43:02 -- common/autotest_common.sh@110 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:17.205 01:43:02 -- common/autotest_common.sh@112 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:17.205 01:43:02 -- common/autotest_common.sh@114 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:17.205 01:43:02 -- common/autotest_common.sh@116 -- # : 1 00:08:17.205 01:43:02 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:17.205 01:43:02 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:08:17.205 01:43:02 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:17.205 01:43:02 -- common/autotest_common.sh@120 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:17.205 01:43:02 -- common/autotest_common.sh@122 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:17.205 01:43:02 -- common/autotest_common.sh@124 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:17.205 01:43:02 -- common/autotest_common.sh@126 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:17.205 01:43:02 -- common/autotest_common.sh@128 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:17.205 01:43:02 -- common/autotest_common.sh@130 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:17.205 01:43:02 -- common/autotest_common.sh@132 -- # : v23.11 00:08:17.205 01:43:02 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:17.205 01:43:02 -- common/autotest_common.sh@134 -- # : true 00:08:17.205 01:43:02 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:17.205 01:43:02 -- common/autotest_common.sh@136 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:17.205 01:43:02 -- common/autotest_common.sh@138 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:17.205 01:43:02 -- common/autotest_common.sh@140 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:17.205 01:43:02 -- common/autotest_common.sh@142 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:17.205 01:43:02 -- common/autotest_common.sh@144 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:17.205 01:43:02 -- common/autotest_common.sh@146 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:17.205 01:43:02 -- common/autotest_common.sh@148 -- # : e810 00:08:17.205 01:43:02 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:17.205 01:43:02 -- common/autotest_common.sh@150 -- # : 0 00:08:17.205 01:43:02 -- 
common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:17.205 01:43:02 -- common/autotest_common.sh@152 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:17.205 01:43:02 -- common/autotest_common.sh@154 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:17.205 01:43:02 -- common/autotest_common.sh@156 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:17.205 01:43:02 -- common/autotest_common.sh@158 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:17.205 01:43:02 -- common/autotest_common.sh@160 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:17.205 01:43:02 -- common/autotest_common.sh@163 -- # : 00:08:17.205 01:43:02 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:17.205 01:43:02 -- common/autotest_common.sh@165 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:17.205 01:43:02 -- common/autotest_common.sh@167 -- # : 0 00:08:17.205 01:43:02 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:17.205 01:43:02 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:17.205 01:43:02 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:08:17.205 01:43:02 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:17.205 01:43:02 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:08:17.205 01:43:02 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:17.205 01:43:02 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:17.205 01:43:02 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:17.205 01:43:02 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:17.205 01:43:02 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:17.205 01:43:02 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:17.205 01:43:02 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:17.205 01:43:02 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:08:17.205 01:43:02 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:17.205 01:43:02 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:17.205 01:43:02 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:17.205 01:43:02 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:17.205 01:43:02 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:17.205 01:43:02 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:17.205 01:43:02 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:17.205 01:43:02 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:17.205 01:43:02 -- common/autotest_common.sh@196 -- # cat 00:08:17.205 01:43:02 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:17.205 01:43:02 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:17.205 01:43:02 -- 
common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:17.205 01:43:02 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:17.205 01:43:02 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:17.205 01:43:02 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:17.205 01:43:02 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:17.205 01:43:02 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:17.206 01:43:02 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:08:17.206 01:43:02 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:17.206 01:43:02 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:08:17.206 01:43:02 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:17.206 01:43:02 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:17.206 01:43:02 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:17.206 01:43:02 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:17.206 01:43:02 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:17.206 01:43:02 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:17.206 01:43:02 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:17.206 01:43:02 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:17.206 01:43:02 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:17.206 01:43:02 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:17.206 01:43:02 -- common/autotest_common.sh@249 -- # valgrind= 00:08:17.206 01:43:02 -- common/autotest_common.sh@255 -- # uname -s 00:08:17.206 01:43:02 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:17.206 01:43:02 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:17.206 01:43:02 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:17.206 01:43:02 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:17.206 01:43:02 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:17.206 01:43:02 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:17.206 01:43:02 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:17.206 01:43:02 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j48 00:08:17.206 01:43:02 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:17.206 01:43:02 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:17.206 01:43:02 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:08:17.206 01:43:02 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:17.206 01:43:02 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:17.206 01:43:02 -- common/autotest_common.sh@291 -- # for i in "$@" 00:08:17.206 01:43:02 -- common/autotest_common.sh@292 -- # case "$i" in 00:08:17.206 01:43:02 -- common/autotest_common.sh@297 -- 
# TEST_TRANSPORT=tcp 00:08:17.206 01:43:02 -- common/autotest_common.sh@309 -- # [[ -z 2058444 ]] 00:08:17.206 01:43:02 -- common/autotest_common.sh@309 -- # kill -0 2058444 00:08:17.206 01:43:02 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:17.206 01:43:02 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:17.206 01:43:02 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:17.206 01:43:02 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:17.206 01:43:02 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:17.206 01:43:02 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:17.206 01:43:02 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:17.206 01:43:02 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:17.206 01:43:02 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.RUECai 00:08:17.206 01:43:02 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:17.206 01:43:02 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:17.206 01:43:02 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:17.206 01:43:02 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.RUECai/tests/target /tmp/spdk.RUECai 00:08:17.206 01:43:02 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:17.206 01:43:02 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:17.206 01:43:02 -- common/autotest_common.sh@318 -- # df -T 00:08:17.206 01:43:02 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:17.206 01:43:02 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:08:17.206 01:43:02 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:08:17.206 01:43:02 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:08:17.206 01:43:02 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:08:17.206 01:43:02 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:17.206 01:43:02 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:17.206 01:43:02 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:08:17.206 01:43:02 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:08:17.206 01:43:02 -- common/autotest_common.sh@353 -- # avails["$mount"]=996749312 00:08:17.206 01:43:02 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:08:17.206 01:43:02 -- common/autotest_common.sh@354 -- # uses["$mount"]=4287680512 00:08:17.206 01:43:02 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:17.206 01:43:02 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:08:17.206 01:43:02 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:08:17.206 01:43:02 -- common/autotest_common.sh@353 -- # avails["$mount"]=45513629696 00:08:17.206 01:43:02 -- common/autotest_common.sh@353 -- # sizes["$mount"]=61994721280 00:08:17.206 01:43:02 -- common/autotest_common.sh@354 -- # uses["$mount"]=16481091584 00:08:17.206 01:43:02 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:17.206 01:43:02 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:17.206 01:43:02 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 
00:08:17.206 01:43:02 -- common/autotest_common.sh@353 -- # avails["$mount"]=30996103168 00:08:17.206 01:43:02 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997360640 00:08:17.206 01:43:02 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:08:17.206 01:43:02 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:17.206 01:43:02 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:17.206 01:43:02 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:17.206 01:43:02 -- common/autotest_common.sh@353 -- # avails["$mount"]=12390178816 00:08:17.206 01:43:02 -- common/autotest_common.sh@353 -- # sizes["$mount"]=12398944256 00:08:17.206 01:43:02 -- common/autotest_common.sh@354 -- # uses["$mount"]=8765440 00:08:17.206 01:43:02 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:17.206 01:43:02 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:17.206 01:43:02 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:17.206 01:43:02 -- common/autotest_common.sh@353 -- # avails["$mount"]=30996717568 00:08:17.206 01:43:02 -- common/autotest_common.sh@353 -- # sizes["$mount"]=30997360640 00:08:17.206 01:43:02 -- common/autotest_common.sh@354 -- # uses["$mount"]=643072 00:08:17.206 01:43:02 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:17.206 01:43:02 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:17.206 01:43:02 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:17.206 01:43:02 -- common/autotest_common.sh@353 -- # avails["$mount"]=6199468032 00:08:17.206 01:43:02 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6199472128 00:08:17.206 01:43:02 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:08:17.206 01:43:02 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:17.206 01:43:02 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:17.206 * Looking for test storage... 
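Everything from the "df -T" call down to this point is set_test_storage() caching per-mount capacity into associative arrays, so that the lines which follow can pick a test directory with at least the requested 2 GiB of headroom. A condensed sketch of that selection logic (array and field names mirror the trace; the multi-candidate fallback handling is omitted):

    set_test_storage_sketch() {
        local requested_size=$1 target_dir=$2
        local source fs size use avail _ mount
        local -A avails

        # Cache available space per mount point; df reports 1K blocks,
        # so scale to bytes before comparing.
        while read -r source fs size use avail _ mount; do
            avails["$mount"]=$((avail * 1024))
        done < <(df -T | grep -v Filesystem)

        # Resolve the mount point backing target_dir, then compare.
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/ {print $6}')
        (( avails["$mount"] >= requested_size )) && echo "$target_dir"
    }
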
00:08:17.206 01:43:02 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:17.206 01:43:02 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:17.206 01:43:02 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.206 01:43:02 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:17.206 01:43:02 -- common/autotest_common.sh@363 -- # mount=/ 00:08:17.206 01:43:02 -- common/autotest_common.sh@365 -- # target_space=45513629696 00:08:17.206 01:43:02 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:17.206 01:43:02 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:17.206 01:43:02 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:08:17.206 01:43:02 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:08:17.206 01:43:02 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:08:17.206 01:43:02 -- common/autotest_common.sh@372 -- # new_size=18695684096 00:08:17.206 01:43:02 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:17.206 01:43:02 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.206 01:43:02 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.206 01:43:02 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.206 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:17.206 01:43:02 -- common/autotest_common.sh@380 -- # return 0 00:08:17.206 01:43:02 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:17.206 01:43:02 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:17.206 01:43:02 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:17.206 01:43:02 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:17.206 01:43:02 -- common/autotest_common.sh@1672 -- # true 00:08:17.206 01:43:02 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:17.206 01:43:02 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:17.206 01:43:02 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:17.206 01:43:02 -- common/autotest_common.sh@27 -- # exec 00:08:17.206 01:43:02 -- common/autotest_common.sh@29 -- # exec 00:08:17.206 01:43:02 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:17.206 01:43:02 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:17.206 01:43:02 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:17.206 01:43:02 -- common/autotest_common.sh@18 -- # set -x 00:08:17.206 01:43:02 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:17.206 01:43:02 -- nvmf/common.sh@7 -- # uname -s 00:08:17.206 01:43:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:17.206 01:43:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:17.206 01:43:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:17.206 01:43:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:17.206 01:43:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:17.206 01:43:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:17.206 01:43:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:17.206 01:43:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:17.207 01:43:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:17.207 01:43:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:17.207 01:43:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.207 01:43:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:17.207 01:43:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:17.207 01:43:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:17.207 01:43:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:17.207 01:43:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:17.207 01:43:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:17.207 01:43:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:17.207 01:43:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:17.207 01:43:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.207 01:43:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.207 01:43:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.207 01:43:02 -- paths/export.sh@5 -- # export PATH 00:08:17.207 01:43:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:17.207 01:43:02 -- nvmf/common.sh@46 -- # : 0 00:08:17.207 01:43:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:17.207 01:43:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:17.207 01:43:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:17.207 01:43:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:17.207 01:43:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:17.207 01:43:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:17.207 01:43:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:17.207 01:43:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:17.207 01:43:02 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:17.207 01:43:02 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:17.207 01:43:02 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:17.207 01:43:02 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:17.207 01:43:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:17.207 01:43:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:17.207 01:43:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:17.207 01:43:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:17.207 01:43:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:17.207 01:43:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:17.207 01:43:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.207 01:43:02 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:17.207 01:43:02 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:17.207 01:43:02 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:17.207 01:43:02 -- common/autotest_common.sh@10 -- # set +x 00:08:19.110 01:43:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:19.110 01:43:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:19.110 01:43:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:19.110 01:43:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:19.110 01:43:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:19.110 01:43:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:19.110 01:43:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:19.110 01:43:04 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:19.110 01:43:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:19.110 01:43:04 -- nvmf/common.sh@295 -- # e810=() 00:08:19.110 01:43:04 -- nvmf/common.sh@295 -- # local -ga e810 00:08:19.110 01:43:04 -- nvmf/common.sh@296 -- # x722=() 00:08:19.110 01:43:04 -- nvmf/common.sh@296 -- # local -ga x722 00:08:19.110 01:43:04 -- nvmf/common.sh@297 -- # mlx=() 00:08:19.110 01:43:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:19.110 01:43:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:19.110 01:43:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:19.110 01:43:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:19.110 01:43:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:19.110 01:43:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:19.110 01:43:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:19.110 01:43:04 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:19.110 01:43:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:19.110 01:43:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:19.110 01:43:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:19.110 01:43:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:19.110 01:43:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:19.110 01:43:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:19.110 01:43:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:19.110 01:43:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:19.110 01:43:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:19.110 01:43:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:19.110 01:43:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:19.110 01:43:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:19.110 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:19.110 01:43:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:19.110 01:43:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:19.110 01:43:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.110 01:43:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.110 01:43:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:19.110 01:43:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:19.110 01:43:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:19.110 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:19.110 01:43:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:19.110 01:43:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:19.110 01:43:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:19.110 01:43:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:19.110 01:43:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:19.110 01:43:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:19.110 01:43:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:19.110 01:43:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:19.110 01:43:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:19.110 01:43:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.110 01:43:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:19.110 01:43:04 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.110 01:43:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:19.110 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:19.110 01:43:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.110 01:43:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:19.110 01:43:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:19.110 01:43:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:19.110 01:43:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:19.110 01:43:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:19.110 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:19.110 01:43:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:19.110 01:43:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:19.110 01:43:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:19.110 01:43:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:19.110 01:43:04 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:19.110 01:43:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:19.110 01:43:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:19.110 01:43:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:19.110 01:43:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:19.110 01:43:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:19.110 01:43:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:19.110 01:43:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:19.110 01:43:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:19.110 01:43:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:19.110 01:43:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:19.110 01:43:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:19.110 01:43:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:19.110 01:43:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:19.110 01:43:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:19.110 01:43:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:19.110 01:43:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:19.110 01:43:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:19.110 01:43:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:19.369 01:43:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:19.369 01:43:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:19.369 01:43:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:19.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:19.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:08:19.369 00:08:19.369 --- 10.0.0.2 ping statistics --- 00:08:19.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.369 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:08:19.369 01:43:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:19.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:19.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:08:19.369 00:08:19.369 --- 10.0.0.1 ping statistics --- 00:08:19.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:19.369 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:08:19.369 01:43:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:19.369 01:43:04 -- nvmf/common.sh@410 -- # return 0 00:08:19.369 01:43:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:19.369 01:43:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:19.369 01:43:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:19.369 01:43:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:19.369 01:43:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:19.369 01:43:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:19.369 01:43:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:19.369 01:43:04 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:19.369 01:43:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:19.369 01:43:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.369 01:43:04 -- common/autotest_common.sh@10 -- # set +x 00:08:19.369 ************************************ 00:08:19.369 START TEST nvmf_filesystem_no_in_capsule 00:08:19.369 ************************************ 00:08:19.369 01:43:04 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:08:19.369 01:43:04 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:19.369 01:43:04 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:19.369 01:43:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:19.369 01:43:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:19.369 01:43:04 -- common/autotest_common.sh@10 -- # set +x 00:08:19.369 01:43:04 -- nvmf/common.sh@469 -- # nvmfpid=2060063 00:08:19.369 01:43:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:19.369 01:43:04 -- nvmf/common.sh@470 -- # waitforlisten 2060063 00:08:19.369 01:43:04 -- common/autotest_common.sh@819 -- # '[' -z 2060063 ']' 00:08:19.369 01:43:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:19.369 01:43:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:19.369 01:43:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:19.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:19.369 01:43:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:19.369 01:43:04 -- common/autotest_common.sh@10 -- # set +x 00:08:19.369 [2024-04-15 01:43:04.868786] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
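nvmfappstart, traced just above, forks nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then waitforlisten polls until the pid is alive and the RPC socket answers. A simplified stand-in for that start-and-wait sequence (binary path, namespace, and app flags copied from the trace; the real helper also handles timeouts and cleanup traps):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Launch the target in the test namespace; the UNIX-domain RPC socket
    # lives on the shared filesystem, so it is reachable from outside.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # Poll the default RPC socket until the target answers or exits.
    for _ in $(seq 1 100); do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
        sleep 0.1
    done
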
00:08:19.369 [2024-04-15 01:43:04.868869] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:19.369 EAL: No free 2048 kB hugepages reported on node 1 00:08:19.369 [2024-04-15 01:43:04.941595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:19.627 [2024-04-15 01:43:05.031528] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:19.628 [2024-04-15 01:43:05.031683] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:19.628 [2024-04-15 01:43:05.031700] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:19.628 [2024-04-15 01:43:05.031712] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:19.628 [2024-04-15 01:43:05.031761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:19.628 [2024-04-15 01:43:05.031819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:19.628 [2024-04-15 01:43:05.031885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:19.628 [2024-04-15 01:43:05.031887] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.194 01:43:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:20.194 01:43:05 -- common/autotest_common.sh@852 -- # return 0 00:08:20.194 01:43:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:20.194 01:43:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:20.194 01:43:05 -- common/autotest_common.sh@10 -- # set +x 00:08:20.194 01:43:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:20.194 01:43:05 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:20.194 01:43:05 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:20.194 01:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:20.194 01:43:05 -- common/autotest_common.sh@10 -- # set +x 00:08:20.194 [2024-04-15 01:43:05.824579] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:20.194 01:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:20.194 01:43:05 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:20.194 01:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:20.194 01:43:05 -- common/autotest_common.sh@10 -- # set +x 00:08:20.453 Malloc1 00:08:20.453 01:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:20.453 01:43:05 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:20.453 01:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:20.453 01:43:05 -- common/autotest_common.sh@10 -- # set +x 00:08:20.453 01:43:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:20.453 01:43:05 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:20.453 01:43:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:20.453 01:43:05 -- common/autotest_common.sh@10 -- # set +x 00:08:20.453 01:43:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:20.453 01:43:06 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
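rpc_cmd in the steps above is a thin wrapper that ships each command to the target's default socket at /var/tmp/spdk.sock. Spelled out against scripts/rpc.py, the bring-up this test performs amounts to roughly the following five calls (arguments copied from the trace; rpc_cmd's retry and error plumbing omitted):

    rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0   # -c 0: no in-capsule data, matching the test name
    $rpc bdev_malloc_create 512 512 -b Malloc1          # 512 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
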
00:08:20.453 01:43:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:20.453 01:43:06 -- common/autotest_common.sh@10 -- # set +x 00:08:20.453 [2024-04-15 01:43:06.008657] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:20.453 01:43:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:20.453 01:43:06 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:20.453 01:43:06 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:20.453 01:43:06 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:20.453 01:43:06 -- common/autotest_common.sh@1359 -- # local bs 00:08:20.453 01:43:06 -- common/autotest_common.sh@1360 -- # local nb 00:08:20.453 01:43:06 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:20.453 01:43:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:20.453 01:43:06 -- common/autotest_common.sh@10 -- # set +x 00:08:20.453 01:43:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:20.453 01:43:06 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:20.453 { 00:08:20.453 "name": "Malloc1", 00:08:20.453 "aliases": [ 00:08:20.453 "e85eb10e-a2bd-44ef-b11d-b12b63ff505c" 00:08:20.453 ], 00:08:20.453 "product_name": "Malloc disk", 00:08:20.453 "block_size": 512, 00:08:20.453 "num_blocks": 1048576, 00:08:20.453 "uuid": "e85eb10e-a2bd-44ef-b11d-b12b63ff505c", 00:08:20.453 "assigned_rate_limits": { 00:08:20.453 "rw_ios_per_sec": 0, 00:08:20.453 "rw_mbytes_per_sec": 0, 00:08:20.453 "r_mbytes_per_sec": 0, 00:08:20.453 "w_mbytes_per_sec": 0 00:08:20.453 }, 00:08:20.453 "claimed": true, 00:08:20.453 "claim_type": "exclusive_write", 00:08:20.453 "zoned": false, 00:08:20.453 "supported_io_types": { 00:08:20.453 "read": true, 00:08:20.453 "write": true, 00:08:20.453 "unmap": true, 00:08:20.453 "write_zeroes": true, 00:08:20.453 "flush": true, 00:08:20.453 "reset": true, 00:08:20.453 "compare": false, 00:08:20.453 "compare_and_write": false, 00:08:20.453 "abort": true, 00:08:20.453 "nvme_admin": false, 00:08:20.453 "nvme_io": false 00:08:20.453 }, 00:08:20.453 "memory_domains": [ 00:08:20.453 { 00:08:20.453 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:20.453 "dma_device_type": 2 00:08:20.453 } 00:08:20.453 ], 00:08:20.453 "driver_specific": {} 00:08:20.453 } 00:08:20.453 ]' 00:08:20.453 01:43:06 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:20.453 01:43:06 -- common/autotest_common.sh@1362 -- # bs=512 00:08:20.453 01:43:06 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:20.711 01:43:06 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:20.711 01:43:06 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:20.711 01:43:06 -- common/autotest_common.sh@1367 -- # echo 512 00:08:20.711 01:43:06 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:20.711 01:43:06 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:21.276 01:43:06 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:21.276 01:43:06 -- common/autotest_common.sh@1177 -- # local i=0 00:08:21.276 01:43:06 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:21.276 01:43:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:21.276 01:43:06 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:23.175 01:43:08 
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:23.175 01:43:08 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:23.175 01:43:08 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:23.175 01:43:08 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:23.175 01:43:08 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:23.175 01:43:08 -- common/autotest_common.sh@1187 -- # return 0 00:08:23.175 01:43:08 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:23.175 01:43:08 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:23.175 01:43:08 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:23.175 01:43:08 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:23.175 01:43:08 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:23.175 01:43:08 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:23.175 01:43:08 -- setup/common.sh@80 -- # echo 536870912 00:08:23.175 01:43:08 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:23.175 01:43:08 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:23.175 01:43:08 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:23.175 01:43:08 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:23.432 01:43:08 -- target/filesystem.sh@69 -- # partprobe 00:08:23.998 01:43:09 -- target/filesystem.sh@70 -- # sleep 1 00:08:25.371 01:43:10 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:25.371 01:43:10 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:25.371 01:43:10 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:25.371 01:43:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.371 01:43:10 -- common/autotest_common.sh@10 -- # set +x 00:08:25.371 ************************************ 00:08:25.371 START TEST filesystem_ext4 00:08:25.371 ************************************ 00:08:25.371 01:43:10 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:25.371 01:43:10 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:25.371 01:43:10 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:25.371 01:43:10 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:25.371 01:43:10 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:25.371 01:43:10 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:25.371 01:43:10 -- common/autotest_common.sh@904 -- # local i=0 00:08:25.371 01:43:10 -- common/autotest_common.sh@905 -- # local force 00:08:25.371 01:43:10 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:25.371 01:43:10 -- common/autotest_common.sh@908 -- # force=-F 00:08:25.371 01:43:10 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:25.371 mke2fs 1.46.5 (30-Dec-2021) 00:08:25.371 Discarding device blocks: 0/522240 done 00:08:25.371 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:25.371 Filesystem UUID: 878c9a61-9470-4e6f-a850-561e5584ce2b 00:08:25.371 Superblock backups stored on blocks: 00:08:25.371 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:25.371 00:08:25.371 Allocating group tables: 0/64 done 00:08:25.371 Writing inode tables: 0/64 done 00:08:27.271 Creating journal (8192 blocks): done 00:08:27.271 Writing superblocks and filesystem accounting information: 0/64 done 00:08:27.271 00:08:27.271 01:43:12 -- 
common/autotest_common.sh@921 -- # return 0 00:08:27.271 01:43:12 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:28.243 01:43:13 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:28.243 01:43:13 -- target/filesystem.sh@25 -- # sync 00:08:28.243 01:43:13 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:28.243 01:43:13 -- target/filesystem.sh@27 -- # sync 00:08:28.243 01:43:13 -- target/filesystem.sh@29 -- # i=0 00:08:28.243 01:43:13 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:28.243 01:43:13 -- target/filesystem.sh@37 -- # kill -0 2060063 00:08:28.243 01:43:13 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:28.243 01:43:13 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:28.243 01:43:13 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:28.243 01:43:13 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:28.243 00:08:28.243 real 0m3.254s 00:08:28.243 user 0m0.014s 00:08:28.243 sys 0m0.068s 00:08:28.243 01:43:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.243 01:43:13 -- common/autotest_common.sh@10 -- # set +x 00:08:28.243 ************************************ 00:08:28.243 END TEST filesystem_ext4 00:08:28.243 ************************************ 00:08:28.243 01:43:13 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:28.243 01:43:13 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:28.243 01:43:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:28.243 01:43:13 -- common/autotest_common.sh@10 -- # set +x 00:08:28.243 ************************************ 00:08:28.243 START TEST filesystem_btrfs 00:08:28.243 ************************************ 00:08:28.243 01:43:13 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:28.243 01:43:13 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:28.243 01:43:13 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:28.243 01:43:13 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:28.243 01:43:13 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:28.243 01:43:13 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:28.243 01:43:13 -- common/autotest_common.sh@904 -- # local i=0 00:08:28.243 01:43:13 -- common/autotest_common.sh@905 -- # local force 00:08:28.243 01:43:13 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:28.243 01:43:13 -- common/autotest_common.sh@910 -- # force=-f 00:08:28.243 01:43:13 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:28.501 btrfs-progs v6.6.2 00:08:28.501 See https://btrfs.readthedocs.io for more information. 00:08:28.501 00:08:28.501 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
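Once mkfs succeeds, every filesystem variant gets the same smoke test seen above for ext4 (and repeated below for btrfs and xfs): mount, create and delete a file with syncs in between, unmount, then confirm the target process and the block devices survived. A sketch with the device, mountpoint, and pid from this run:

  #!/usr/bin/env bash
  set -e
  dev=/dev/nvme0n1p1    # partition created earlier in the trace
  mnt=/mnt/device
  pid=2060063           # nvmf_tgt pid from this run; substitute your own

  mount "$dev" "$mnt"
  touch "$mnt/aaa"; sync
  rm "$mnt/aaa"; sync
  umount "$mnt"

  kill -0 "$pid"                               # target still alive?
  lsblk -l -o NAME | grep -q -w nvme0n1        # disk still enumerated?
  lsblk -l -o NAME | grep -q -w nvme0n1p1      # partition too?
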
00:08:28.501 NOTE: several default settings have changed in version 5.15, please make sure 00:08:28.501 this does not affect your deployments: 00:08:28.501 - DUP for metadata (-m dup) 00:08:28.501 - enabled no-holes (-O no-holes) 00:08:28.501 - enabled free-space-tree (-R free-space-tree) 00:08:28.501 00:08:28.501 Label: (null) 00:08:28.501 UUID: b145f12a-cd8e-41b5-8a42-c3d8b14932d4 00:08:28.501 Node size: 16384 00:08:28.501 Sector size: 4096 00:08:28.501 Filesystem size: 510.00MiB 00:08:28.501 Block group profiles: 00:08:28.502 Data: single 8.00MiB 00:08:28.502 Metadata: DUP 32.00MiB 00:08:28.502 System: DUP 8.00MiB 00:08:28.502 SSD detected: yes 00:08:28.502 Zoned device: no 00:08:28.502 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:28.502 Runtime features: free-space-tree 00:08:28.502 Checksum: crc32c 00:08:28.502 Number of devices: 1 00:08:28.502 Devices: 00:08:28.502 ID SIZE PATH 00:08:28.502 1 510.00MiB /dev/nvme0n1p1 00:08:28.502 00:08:28.502 01:43:14 -- common/autotest_common.sh@921 -- # return 0 00:08:28.502 01:43:14 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:29.433 01:43:14 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:29.433 01:43:14 -- target/filesystem.sh@25 -- # sync 00:08:29.433 01:43:14 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:29.433 01:43:14 -- target/filesystem.sh@27 -- # sync 00:08:29.433 01:43:14 -- target/filesystem.sh@29 -- # i=0 00:08:29.433 01:43:14 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:29.433 01:43:14 -- target/filesystem.sh@37 -- # kill -0 2060063 00:08:29.433 01:43:14 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:29.433 01:43:14 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:29.433 01:43:14 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:29.433 01:43:14 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:29.433 00:08:29.433 real 0m1.109s 00:08:29.433 user 0m0.021s 00:08:29.433 sys 0m0.116s 00:08:29.433 01:43:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.433 01:43:14 -- common/autotest_common.sh@10 -- # set +x 00:08:29.433 ************************************ 00:08:29.433 END TEST filesystem_btrfs 00:08:29.433 ************************************ 00:08:29.433 01:43:15 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:29.433 01:43:15 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:29.433 01:43:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.433 01:43:15 -- common/autotest_common.sh@10 -- # set +x 00:08:29.433 ************************************ 00:08:29.433 START TEST filesystem_xfs 00:08:29.433 ************************************ 00:08:29.433 01:43:15 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:29.433 01:43:15 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:29.433 01:43:15 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:29.433 01:43:15 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:29.433 01:43:15 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:29.433 01:43:15 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:29.433 01:43:15 -- common/autotest_common.sh@904 -- # local i=0 00:08:29.433 01:43:15 -- common/autotest_common.sh@905 -- # local force 00:08:29.433 01:43:15 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:29.433 01:43:15 -- common/autotest_common.sh@910 -- # force=-f 00:08:29.433 01:43:15 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:29.690 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:29.690 = sectsz=512 attr=2, projid32bit=1 00:08:29.690 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:29.690 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:29.690 data = bsize=4096 blocks=130560, imaxpct=25 00:08:29.690 = sunit=0 swidth=0 blks 00:08:29.690 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:29.690 log =internal log bsize=4096 blocks=16384, version=2 00:08:29.690 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:29.690 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:30.621 Discarding blocks...Done. 00:08:30.621 01:43:15 -- common/autotest_common.sh@921 -- # return 0 00:08:30.621 01:43:15 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:33.147 01:43:18 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:33.147 01:43:18 -- target/filesystem.sh@25 -- # sync 00:08:33.147 01:43:18 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:33.147 01:43:18 -- target/filesystem.sh@27 -- # sync 00:08:33.147 01:43:18 -- target/filesystem.sh@29 -- # i=0 00:08:33.147 01:43:18 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:33.147 01:43:18 -- target/filesystem.sh@37 -- # kill -0 2060063 00:08:33.147 01:43:18 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:33.147 01:43:18 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:33.147 01:43:18 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:33.147 01:43:18 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:33.147 00:08:33.147 real 0m3.303s 00:08:33.147 user 0m0.019s 00:08:33.147 sys 0m0.064s 00:08:33.147 01:43:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.147 01:43:18 -- common/autotest_common.sh@10 -- # set +x 00:08:33.147 ************************************ 00:08:33.147 END TEST filesystem_xfs 00:08:33.147 ************************************ 00:08:33.147 01:43:18 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:33.147 01:43:18 -- target/filesystem.sh@93 -- # sync 00:08:33.147 01:43:18 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:33.147 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:33.147 01:43:18 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:33.147 01:43:18 -- common/autotest_common.sh@1198 -- # local i=0 00:08:33.147 01:43:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:33.147 01:43:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.147 01:43:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:33.147 01:43:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:33.147 01:43:18 -- common/autotest_common.sh@1210 -- # return 0 00:08:33.147 01:43:18 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:33.147 01:43:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.147 01:43:18 -- common/autotest_common.sh@10 -- # set +x 00:08:33.147 01:43:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.147 01:43:18 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:33.147 01:43:18 -- target/filesystem.sh@101 -- # killprocess 2060063 00:08:33.147 01:43:18 -- common/autotest_common.sh@926 -- # '[' -z 2060063 ']' 00:08:33.147 01:43:18 -- common/autotest_common.sh@930 -- # kill -0 2060063 00:08:33.147 01:43:18 -- 
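Across the three make_filesystem traces in this section, the only variation is the force flag: mkfs.ext4 wants -F while mkfs.btrfs and mkfs.xfs want -f. A standalone sketch of that dispatch (the function name here is illustrative):

  # mkfs wrapper mirroring the traced branch: pick the right force flag,
  # then hand off to the matching mkfs binary.
  make_filesystem_sketch() {
      local fstype=$1 dev_name=$2 force
      if [ "$fstype" = ext4 ]; then
          force=-F
      else
          force=-f
      fi
      mkfs."$fstype" "$force" "$dev_name"
  }

  # usage: make_filesystem_sketch xfs /dev/nvme0n1p1
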
common/autotest_common.sh@931 -- # uname 00:08:33.147 01:43:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:33.147 01:43:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2060063 00:08:33.147 01:43:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:33.147 01:43:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:33.147 01:43:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2060063' 00:08:33.147 killing process with pid 2060063 00:08:33.147 01:43:18 -- common/autotest_common.sh@945 -- # kill 2060063 00:08:33.147 01:43:18 -- common/autotest_common.sh@950 -- # wait 2060063 00:08:33.406 01:43:18 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:33.406 00:08:33.406 real 0m14.162s 00:08:33.406 user 0m54.690s 00:08:33.406 sys 0m1.937s 00:08:33.406 01:43:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:33.406 01:43:18 -- common/autotest_common.sh@10 -- # set +x 00:08:33.406 ************************************ 00:08:33.406 END TEST nvmf_filesystem_no_in_capsule 00:08:33.406 ************************************ 00:08:33.406 01:43:19 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:33.406 01:43:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:33.406 01:43:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:33.406 01:43:19 -- common/autotest_common.sh@10 -- # set +x 00:08:33.406 ************************************ 00:08:33.406 START TEST nvmf_filesystem_in_capsule 00:08:33.406 ************************************ 00:08:33.406 01:43:19 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:33.406 01:43:19 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:33.406 01:43:19 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:33.406 01:43:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:33.406 01:43:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:33.406 01:43:19 -- common/autotest_common.sh@10 -- # set +x 00:08:33.406 01:43:19 -- nvmf/common.sh@469 -- # nvmfpid=2061946 00:08:33.406 01:43:19 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:33.406 01:43:19 -- nvmf/common.sh@470 -- # waitforlisten 2061946 00:08:33.406 01:43:19 -- common/autotest_common.sh@819 -- # '[' -z 2061946 ']' 00:08:33.406 01:43:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:33.406 01:43:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:33.406 01:43:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:33.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:33.406 01:43:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:33.406 01:43:19 -- common/autotest_common.sh@10 -- # set +x 00:08:33.665 [2024-04-15 01:43:19.059665] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
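The killprocess helper traced above checks that the pid still names an SPDK reactor before signalling it. A sketch of the same sequence; the polling loop stands in for the script's wait, which only applies to child processes:

  # Verify, signal, and wait out a target process by pid.
  killprocess_sketch() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0    # already gone
      local name
      name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for nvmf_tgt
      echo "killing process with pid $pid ($name)"
      kill "$pid"
      while kill -0 "$pid" 2>/dev/null; do sleep 0.5; done
  }
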
00:08:33.665 [2024-04-15 01:43:19.059742] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:33.665 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.665 [2024-04-15 01:43:19.136350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:33.665 [2024-04-15 01:43:19.228432] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:33.665 [2024-04-15 01:43:19.228638] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:33.665 [2024-04-15 01:43:19.228664] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:33.665 [2024-04-15 01:43:19.228686] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:33.665 [2024-04-15 01:43:19.228753] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:33.665 [2024-04-15 01:43:19.228827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:33.665 [2024-04-15 01:43:19.228950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.665 [2024-04-15 01:43:19.228956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.599 01:43:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:34.599 01:43:19 -- common/autotest_common.sh@852 -- # return 0 00:08:34.599 01:43:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:34.599 01:43:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:34.599 01:43:20 -- common/autotest_common.sh@10 -- # set +x 00:08:34.599 01:43:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:34.599 01:43:20 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:34.599 01:43:20 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:34.599 01:43:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.599 01:43:20 -- common/autotest_common.sh@10 -- # set +x 00:08:34.599 [2024-04-15 01:43:20.027560] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:34.599 01:43:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.599 01:43:20 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:34.599 01:43:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.599 01:43:20 -- common/autotest_common.sh@10 -- # set +x 00:08:34.599 Malloc1 00:08:34.599 01:43:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.599 01:43:20 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:34.599 01:43:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.599 01:43:20 -- common/autotest_common.sh@10 -- # set +x 00:08:34.599 01:43:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.599 01:43:20 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:34.599 01:43:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.599 01:43:20 -- common/autotest_common.sh@10 -- # set +x 00:08:34.599 01:43:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.599 01:43:20 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
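The five rpc_cmd calls above are the entire target-side fixture for the in-capsule run. Issued directly against a running nvmf_tgt with scripts/rpc.py, the equivalent sequence would be:

  rpc_py=./scripts/rpc.py   # SPDK checkout assumed

  "$rpc_py" nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 4096: in-capsule data size under test
  "$rpc_py" bdev_malloc_create 512 512 -b Malloc1             # 512 MiB ramdisk, 512-byte blocks
  "$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  "$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
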
00:08:34.599 01:43:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.599 01:43:20 -- common/autotest_common.sh@10 -- # set +x 00:08:34.599 [2024-04-15 01:43:20.223591] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:34.599 01:43:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.599 01:43:20 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:34.599 01:43:20 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:34.599 01:43:20 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:34.599 01:43:20 -- common/autotest_common.sh@1359 -- # local bs 00:08:34.599 01:43:20 -- common/autotest_common.sh@1360 -- # local nb 00:08:34.599 01:43:20 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:34.599 01:43:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.599 01:43:20 -- common/autotest_common.sh@10 -- # set +x 00:08:34.599 01:43:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.599 01:43:20 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:34.599 { 00:08:34.599 "name": "Malloc1", 00:08:34.599 "aliases": [ 00:08:34.599 "09db9a99-44b3-4f98-8705-628e1721ab5d" 00:08:34.599 ], 00:08:34.599 "product_name": "Malloc disk", 00:08:34.599 "block_size": 512, 00:08:34.599 "num_blocks": 1048576, 00:08:34.600 "uuid": "09db9a99-44b3-4f98-8705-628e1721ab5d", 00:08:34.600 "assigned_rate_limits": { 00:08:34.600 "rw_ios_per_sec": 0, 00:08:34.600 "rw_mbytes_per_sec": 0, 00:08:34.600 "r_mbytes_per_sec": 0, 00:08:34.600 "w_mbytes_per_sec": 0 00:08:34.600 }, 00:08:34.600 "claimed": true, 00:08:34.600 "claim_type": "exclusive_write", 00:08:34.600 "zoned": false, 00:08:34.600 "supported_io_types": { 00:08:34.600 "read": true, 00:08:34.600 "write": true, 00:08:34.600 "unmap": true, 00:08:34.600 "write_zeroes": true, 00:08:34.600 "flush": true, 00:08:34.600 "reset": true, 00:08:34.600 "compare": false, 00:08:34.600 "compare_and_write": false, 00:08:34.600 "abort": true, 00:08:34.600 "nvme_admin": false, 00:08:34.600 "nvme_io": false 00:08:34.600 }, 00:08:34.600 "memory_domains": [ 00:08:34.600 { 00:08:34.600 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.600 "dma_device_type": 2 00:08:34.600 } 00:08:34.600 ], 00:08:34.600 "driver_specific": {} 00:08:34.600 } 00:08:34.600 ]' 00:08:34.600 01:43:20 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:34.857 01:43:20 -- common/autotest_common.sh@1362 -- # bs=512 00:08:34.857 01:43:20 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:34.857 01:43:20 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:34.857 01:43:20 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:34.857 01:43:20 -- common/autotest_common.sh@1367 -- # echo 512 00:08:34.857 01:43:20 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:34.858 01:43:20 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:35.424 01:43:20 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:35.424 01:43:20 -- common/autotest_common.sh@1177 -- # local i=0 00:08:35.424 01:43:20 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:35.424 01:43:20 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:35.424 01:43:20 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:37.321 01:43:22 
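On the host side, the connect traced above is a single nvme-cli call; the hostnqn/hostid pair comes from an earlier nvme gen-hostnqn:

  sudo nvme connect -t tcp -a 10.0.0.2 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
      --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
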
-- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:37.321 01:43:22 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:37.321 01:43:22 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:37.321 01:43:22 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:37.321 01:43:22 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:37.321 01:43:22 -- common/autotest_common.sh@1187 -- # return 0 00:08:37.321 01:43:22 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:37.321 01:43:22 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:37.321 01:43:22 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:37.321 01:43:22 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:37.321 01:43:22 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:37.321 01:43:22 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:37.321 01:43:22 -- setup/common.sh@80 -- # echo 536870912 00:08:37.321 01:43:22 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:37.321 01:43:22 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:37.321 01:43:22 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:37.321 01:43:22 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:37.885 01:43:23 -- target/filesystem.sh@69 -- # partprobe 00:08:38.816 01:43:24 -- target/filesystem.sh@70 -- # sleep 1 00:08:39.749 01:43:25 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:39.749 01:43:25 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:39.749 01:43:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:39.749 01:43:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:39.749 01:43:25 -- common/autotest_common.sh@10 -- # set +x 00:08:39.749 ************************************ 00:08:39.749 START TEST filesystem_in_capsule_ext4 00:08:39.749 ************************************ 00:08:39.749 01:43:25 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:39.749 01:43:25 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:39.749 01:43:25 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:39.749 01:43:25 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:39.749 01:43:25 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:39.749 01:43:25 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:39.749 01:43:25 -- common/autotest_common.sh@904 -- # local i=0 00:08:39.749 01:43:25 -- common/autotest_common.sh@905 -- # local force 00:08:39.749 01:43:25 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:39.749 01:43:25 -- common/autotest_common.sh@908 -- # force=-F 00:08:39.749 01:43:25 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:39.749 mke2fs 1.46.5 (30-Dec-2021) 00:08:39.749 Discarding device blocks: 0/522240 done 00:08:39.749 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:39.749 Filesystem UUID: c3329d6a-3cc6-4051-8439-07e470bfe585 00:08:39.749 Superblock backups stored on blocks: 00:08:39.749 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:39.749 00:08:39.749 Allocating group tables: 0/64 done 00:08:39.749 Writing inode tables: 0/64 done 00:08:40.007 Creating journal (8192 blocks): done 00:08:40.829 Writing superblocks and filesystem accounting information: 0/64 done 00:08:40.829 00:08:40.829
01:43:26 -- common/autotest_common.sh@921 -- # return 0 00:08:40.829 01:43:26 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:41.763 01:43:27 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:41.763 01:43:27 -- target/filesystem.sh@25 -- # sync 00:08:41.763 01:43:27 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:41.763 01:43:27 -- target/filesystem.sh@27 -- # sync 00:08:41.763 01:43:27 -- target/filesystem.sh@29 -- # i=0 00:08:41.763 01:43:27 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:41.763 01:43:27 -- target/filesystem.sh@37 -- # kill -0 2061946 00:08:41.763 01:43:27 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:41.763 01:43:27 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:41.763 01:43:27 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:41.763 01:43:27 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:41.763 00:08:41.763 real 0m1.991s 00:08:41.763 user 0m0.015s 00:08:41.763 sys 0m0.055s 00:08:41.763 01:43:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.763 01:43:27 -- common/autotest_common.sh@10 -- # set +x 00:08:41.763 ************************************ 00:08:41.763 END TEST filesystem_in_capsule_ext4 00:08:41.763 ************************************ 00:08:41.763 01:43:27 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:41.763 01:43:27 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:41.763 01:43:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:41.763 01:43:27 -- common/autotest_common.sh@10 -- # set +x 00:08:41.763 ************************************ 00:08:41.763 START TEST filesystem_in_capsule_btrfs 00:08:41.763 ************************************ 00:08:41.763 01:43:27 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:41.763 01:43:27 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:41.763 01:43:27 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:41.763 01:43:27 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:41.763 01:43:27 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:41.763 01:43:27 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:41.763 01:43:27 -- common/autotest_common.sh@904 -- # local i=0 00:08:41.763 01:43:27 -- common/autotest_common.sh@905 -- # local force 00:08:41.763 01:43:27 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:41.763 01:43:27 -- common/autotest_common.sh@910 -- # force=-f 00:08:41.763 01:43:27 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:42.329 btrfs-progs v6.6.2 00:08:42.329 See https://btrfs.readthedocs.io for more information. 00:08:42.329 00:08:42.329 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:42.329 NOTE: several default settings have changed in version 5.15, please make sure 00:08:42.329 this does not affect your deployments: 00:08:42.329 - DUP for metadata (-m dup) 00:08:42.329 - enabled no-holes (-O no-holes) 00:08:42.329 - enabled free-space-tree (-R free-space-tree) 00:08:42.329 00:08:42.329 Label: (null) 00:08:42.329 UUID: c74bcf31-d7df-4303-a628-4fb51938c538 00:08:42.329 Node size: 16384 00:08:42.329 Sector size: 4096 00:08:42.329 Filesystem size: 510.00MiB 00:08:42.329 Block group profiles: 00:08:42.329 Data: single 8.00MiB 00:08:42.329 Metadata: DUP 32.00MiB 00:08:42.329 System: DUP 8.00MiB 00:08:42.329 SSD detected: yes 00:08:42.329 Zoned device: no 00:08:42.329 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:42.329 Runtime features: free-space-tree 00:08:42.329 Checksum: crc32c 00:08:42.329 Number of devices: 1 00:08:42.329 Devices: 00:08:42.329 ID SIZE PATH 00:08:42.329 1 510.00MiB /dev/nvme0n1p1 00:08:42.329 00:08:42.329 01:43:27 -- common/autotest_common.sh@921 -- # return 0 00:08:42.329 01:43:27 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:42.587 01:43:28 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:42.587 01:43:28 -- target/filesystem.sh@25 -- # sync 00:08:42.587 01:43:28 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:42.587 01:43:28 -- target/filesystem.sh@27 -- # sync 00:08:42.587 01:43:28 -- target/filesystem.sh@29 -- # i=0 00:08:42.587 01:43:28 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:42.587 01:43:28 -- target/filesystem.sh@37 -- # kill -0 2061946 00:08:42.587 01:43:28 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:42.587 01:43:28 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:42.587 01:43:28 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:42.587 01:43:28 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:42.587 00:08:42.587 real 0m0.849s 00:08:42.587 user 0m0.027s 00:08:42.587 sys 0m0.107s 00:08:42.587 01:43:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.587 01:43:28 -- common/autotest_common.sh@10 -- # set +x 00:08:42.587 ************************************ 00:08:42.587 END TEST filesystem_in_capsule_btrfs 00:08:42.587 ************************************ 00:08:42.587 01:43:28 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:42.587 01:43:28 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:42.587 01:43:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:42.587 01:43:28 -- common/autotest_common.sh@10 -- # set +x 00:08:42.587 ************************************ 00:08:42.587 START TEST filesystem_in_capsule_xfs 00:08:42.587 ************************************ 00:08:42.587 01:43:28 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:42.587 01:43:28 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:42.587 01:43:28 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:42.587 01:43:28 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:42.587 01:43:28 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:42.587 01:43:28 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:42.587 01:43:28 -- common/autotest_common.sh@904 -- # local i=0 00:08:42.587 01:43:28 -- common/autotest_common.sh@905 -- # local force 00:08:42.587 01:43:28 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:42.587 01:43:28 -- common/autotest_common.sh@910 -- # force=-f 
00:08:42.587 01:43:28 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:42.587 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:42.587 = sectsz=512 attr=2, projid32bit=1 00:08:42.587 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:42.587 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:42.587 data = bsize=4096 blocks=130560, imaxpct=25 00:08:42.588 = sunit=0 swidth=0 blks 00:08:42.588 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:42.588 log =internal log bsize=4096 blocks=16384, version=2 00:08:42.588 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:42.588 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:43.521 Discarding blocks...Done. 00:08:43.521 01:43:28 -- common/autotest_common.sh@921 -- # return 0 00:08:43.521 01:43:28 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:46.062 01:43:31 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:46.062 01:43:31 -- target/filesystem.sh@25 -- # sync 00:08:46.062 01:43:31 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:46.062 01:43:31 -- target/filesystem.sh@27 -- # sync 00:08:46.062 01:43:31 -- target/filesystem.sh@29 -- # i=0 00:08:46.062 01:43:31 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:46.062 01:43:31 -- target/filesystem.sh@37 -- # kill -0 2061946 00:08:46.062 01:43:31 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:46.062 01:43:31 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:46.062 01:43:31 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:46.062 01:43:31 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:46.062 00:08:46.062 real 0m3.499s 00:08:46.062 user 0m0.010s 00:08:46.062 sys 0m0.072s 00:08:46.062 01:43:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.062 01:43:31 -- common/autotest_common.sh@10 -- # set +x 00:08:46.062 ************************************ 00:08:46.062 END TEST filesystem_in_capsule_xfs 00:08:46.062 ************************************ 00:08:46.062 01:43:31 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:46.062 01:43:31 -- target/filesystem.sh@93 -- # sync 00:08:46.062 01:43:31 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:46.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:46.321 01:43:31 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:46.321 01:43:31 -- common/autotest_common.sh@1198 -- # local i=0 00:08:46.321 01:43:31 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:46.321 01:43:31 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:46.321 01:43:31 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:46.321 01:43:31 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:46.321 01:43:31 -- common/autotest_common.sh@1210 -- # return 0 00:08:46.321 01:43:31 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:46.321 01:43:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.321 01:43:31 -- common/autotest_common.sh@10 -- # set +x 00:08:46.321 01:43:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.321 01:43:31 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:46.321 01:43:31 -- target/filesystem.sh@101 -- # killprocess 2061946 00:08:46.321 01:43:31 -- common/autotest_common.sh@926 -- # '[' -z 2061946 ']' 00:08:46.321 01:43:31 -- common/autotest_common.sh@930 -- # kill -0 2061946 
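Host-side teardown is also visible above: the test partition is removed under an flock on the whole disk, everything is synced, and the initiator disconnects before the subsystem is deleted. A sketch:

  sudo flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # flock serializes against concurrent partition scans
  sync
  sudo nvme disconnect -n nqn.2016-06.io.spdk:cnode1

  # block until no device with the test serial remains visible
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do sleep 1; done
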
00:08:46.321 01:43:31 -- common/autotest_common.sh@931 -- # uname 00:08:46.321 01:43:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:46.321 01:43:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2061946 00:08:46.321 01:43:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:46.321 01:43:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:46.321 01:43:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2061946' 00:08:46.321 killing process with pid 2061946 00:08:46.321 01:43:31 -- common/autotest_common.sh@945 -- # kill 2061946 00:08:46.321 01:43:31 -- common/autotest_common.sh@950 -- # wait 2061946 00:08:46.888 01:43:32 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:46.888 00:08:46.888 real 0m13.313s 00:08:46.888 user 0m51.333s 00:08:46.888 sys 0m1.901s 00:08:46.888 01:43:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.888 01:43:32 -- common/autotest_common.sh@10 -- # set +x 00:08:46.888 ************************************ 00:08:46.888 END TEST nvmf_filesystem_in_capsule 00:08:46.888 ************************************ 00:08:46.888 01:43:32 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:46.888 01:43:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:46.888 01:43:32 -- nvmf/common.sh@116 -- # sync 00:08:46.888 01:43:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:46.888 01:43:32 -- nvmf/common.sh@119 -- # set +e 00:08:46.888 01:43:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:46.888 01:43:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:46.888 rmmod nvme_tcp 00:08:46.888 rmmod nvme_fabrics 00:08:46.888 rmmod nvme_keyring 00:08:46.888 01:43:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:46.888 01:43:32 -- nvmf/common.sh@123 -- # set -e 00:08:46.888 01:43:32 -- nvmf/common.sh@124 -- # return 0 00:08:46.888 01:43:32 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:46.888 01:43:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:46.888 01:43:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:46.888 01:43:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:46.888 01:43:32 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:46.888 01:43:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:46.888 01:43:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.888 01:43:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:46.888 01:43:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:48.795 01:43:34 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:48.795 00:08:48.795 real 0m31.811s 00:08:48.795 user 1m46.866s 00:08:48.795 sys 0m5.339s 00:08:48.795 01:43:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.795 01:43:34 -- common/autotest_common.sh@10 -- # set +x 00:08:48.795 ************************************ 00:08:48.795 END TEST nvmf_filesystem 00:08:48.795 ************************************ 00:08:49.053 01:43:34 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:49.053 01:43:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:49.053 01:43:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:49.053 01:43:34 -- common/autotest_common.sh@10 -- # set +x 00:08:49.053 ************************************ 00:08:49.053 START TEST nvmf_discovery 00:08:49.053 ************************************ 00:08:49.053 
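The nvmftestfini cleanup traced above unwinds the fabric stack. Roughly, with the namespace name from this run, and assuming _remove_spdk_ns simply deletes that namespace:

  sync
  sudo modprobe -v -r nvme-tcp      # cascades to nvme_fabrics and nvme_keyring, per the rmmod lines
  sudo ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed body of _remove_spdk_ns
  sudo ip -4 addr flush cvl_0_1
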
01:43:34 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:49.053 * Looking for test storage... 00:08:49.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:49.053 01:43:34 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.053 01:43:34 -- nvmf/common.sh@7 -- # uname -s 00:08:49.053 01:43:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.053 01:43:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.053 01:43:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.053 01:43:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.053 01:43:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.053 01:43:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.053 01:43:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.053 01:43:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.053 01:43:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.053 01:43:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.053 01:43:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:49.053 01:43:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:49.053 01:43:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.053 01:43:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.053 01:43:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.053 01:43:34 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:49.053 01:43:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.053 01:43:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.053 01:43:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.054 01:43:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.054 01:43:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.054 01:43:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.054 01:43:34 -- paths/export.sh@5 -- # export PATH 00:08:49.054 01:43:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.054 01:43:34 -- nvmf/common.sh@46 -- # : 0 00:08:49.054 01:43:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:49.054 01:43:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:49.054 01:43:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:49.054 01:43:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.054 01:43:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.054 01:43:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:49.054 01:43:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:49.054 01:43:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:49.054 01:43:34 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:49.054 01:43:34 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:49.054 01:43:34 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:49.054 01:43:34 -- target/discovery.sh@15 -- # hash nvme 00:08:49.054 01:43:34 -- target/discovery.sh@20 -- # nvmftestinit 00:08:49.054 01:43:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:49.054 01:43:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.054 01:43:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:49.054 01:43:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:49.054 01:43:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:49.054 01:43:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.054 01:43:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.054 01:43:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.054 01:43:34 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:49.054 01:43:34 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:49.054 01:43:34 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:49.054 01:43:34 -- common/autotest_common.sh@10 -- # set +x 00:08:51.584 01:43:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:51.584 01:43:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:51.584 01:43:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:51.584 01:43:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:51.584 01:43:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:51.584 01:43:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:51.584 01:43:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:51.584 01:43:36 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:51.585 01:43:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:51.585 01:43:36 -- nvmf/common.sh@295 -- # e810=() 00:08:51.585 01:43:36 -- nvmf/common.sh@295 -- # local -ga e810 00:08:51.585 01:43:36 -- nvmf/common.sh@296 -- # x722=() 00:08:51.585 01:43:36 -- nvmf/common.sh@296 -- # local -ga x722 00:08:51.585 01:43:36 -- nvmf/common.sh@297 -- # mlx=() 00:08:51.585 01:43:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:51.585 01:43:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:51.585 01:43:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:51.585 01:43:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:51.585 01:43:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:51.585 01:43:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:51.585 01:43:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:51.585 01:43:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:51.585 01:43:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:51.585 01:43:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:51.585 01:43:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:51.585 01:43:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:51.585 01:43:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:51.585 01:43:36 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:51.585 01:43:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:51.585 01:43:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:51.585 01:43:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:51.585 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:51.585 01:43:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:51.585 01:43:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:51.585 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:51.585 01:43:36 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:51.585 01:43:36 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:51.585 01:43:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.585 01:43:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:51.585 01:43:36 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.585 01:43:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:51.585 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:51.585 01:43:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.585 01:43:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:51.585 01:43:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:51.585 01:43:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:51.585 01:43:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:51.585 01:43:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:51.585 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:51.585 01:43:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:51.585 01:43:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:51.585 01:43:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:51.585 01:43:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:51.585 01:43:36 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:51.585 01:43:36 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:51.585 01:43:36 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:51.585 01:43:36 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:51.585 01:43:36 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:51.585 01:43:36 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:51.585 01:43:36 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:51.585 01:43:36 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:51.585 01:43:36 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:51.585 01:43:36 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:51.585 01:43:36 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:51.585 01:43:36 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:51.585 01:43:36 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:51.585 01:43:36 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:51.585 01:43:36 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:51.585 01:43:36 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:51.585 01:43:36 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:51.585 01:43:36 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:51.585 01:43:36 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:51.585 01:43:36 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:51.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:51.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:08:51.585 00:08:51.585 --- 10.0.0.2 ping statistics --- 00:08:51.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.585 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:08:51.585 01:43:36 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:51.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:51.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:08:51.585 00:08:51.585 --- 10.0.0.1 ping statistics --- 00:08:51.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:51.585 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:08:51.585 01:43:36 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:51.585 01:43:36 -- nvmf/common.sh@410 -- # return 0 00:08:51.585 01:43:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:51.585 01:43:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:51.585 01:43:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:51.585 01:43:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:51.585 01:43:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:51.585 01:43:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:51.585 01:43:36 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:51.585 01:43:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:51.585 01:43:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:51.585 01:43:36 -- common/autotest_common.sh@10 -- # set +x 00:08:51.585 01:43:36 -- nvmf/common.sh@469 -- # nvmfpid=2065740 00:08:51.585 01:43:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:51.585 01:43:36 -- nvmf/common.sh@470 -- # waitforlisten 2065740 00:08:51.585 01:43:36 -- common/autotest_common.sh@819 -- # '[' -z 2065740 ']' 00:08:51.585 01:43:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.585 01:43:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:51.585 01:43:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.585 01:43:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:51.585 01:43:36 -- common/autotest_common.sh@10 -- # set +x 00:08:51.585 [2024-04-15 01:43:36.896615] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:08:51.585 [2024-04-15 01:43:36.896703] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:51.585 EAL: No free 2048 kB hugepages reported on node 1 00:08:51.585 [2024-04-15 01:43:36.967571] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:51.585 [2024-04-15 01:43:37.059736] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:51.585 [2024-04-15 01:43:37.059883] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:51.585 [2024-04-15 01:43:37.059903] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:51.585 [2024-04-15 01:43:37.059918] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
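The nvmf_tcp_init sequence traced just above builds the two-port loopback every test in this job relies on: physical port cvl_0_0 moves into a fresh namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Reconstructed from the trace:

  ns=cvl_0_0_ns_spdk
  sudo ip netns add "$ns"
  sudo ip link set cvl_0_0 netns "$ns"
  sudo ip addr add 10.0.0.1/24 dev cvl_0_1
  sudo ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_0
  sudo ip link set cvl_0_1 up
  sudo ip netns exec "$ns" ip link set cvl_0_0 up
  sudo ip netns exec "$ns" ip link set lo up
  sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                            # initiator -> target
  sudo ip netns exec "$ns" ping -c 1 10.0.0.1   # target -> initiator

  # nvmf_tgt itself then runs inside the namespace, as the invocation below shows:
  # sudo ip netns exec "$ns" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
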
00:08:51.585 [2024-04-15 01:43:37.060016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:51.585 [2024-04-15 01:43:37.060075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:51.585 [2024-04-15 01:43:37.060108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:51.585 [2024-04-15 01:43:37.060113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.517 01:43:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:52.517 01:43:37 -- common/autotest_common.sh@852 -- # return 0 00:08:52.517 01:43:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:52.517 01:43:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:52.517 01:43:37 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 01:43:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.517 01:43:37 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:52.517 01:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.517 01:43:37 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 [2024-04-15 01:43:37.889671] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.517 01:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.517 01:43:37 -- target/discovery.sh@26 -- # seq 1 4 00:08:52.517 01:43:37 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:52.517 01:43:37 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:52.517 01:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.517 01:43:37 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 Null1 00:08:52.517 01:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.517 01:43:37 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:52.517 01:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.517 01:43:37 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 01:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.517 01:43:37 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:52.517 01:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.517 01:43:37 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 01:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.517 01:43:37 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:52.517 01:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.517 01:43:37 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 [2024-04-15 01:43:37.929952] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.517 01:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.517 01:43:37 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:52.517 01:43:37 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:52.517 01:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.517 01:43:37 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 Null2 00:08:52.517 01:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.517 01:43:37 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:52.517 01:43:37 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.517 01:43:37 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 01:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.517 01:43:37 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:52.517 01:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.517 01:43:37 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 01:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.517 01:43:37 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:52.517 01:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.517 01:43:37 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 01:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.517 01:43:37 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:52.517 01:43:37 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:52.517 01:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.517 01:43:37 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 Null3 00:08:52.517 01:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.517 01:43:37 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:52.517 01:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.517 01:43:37 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 01:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.517 01:43:37 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:52.517 01:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.517 01:43:37 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 01:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.517 01:43:37 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:52.517 01:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.517 01:43:37 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 01:43:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.517 01:43:37 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:52.517 01:43:37 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:52.517 01:43:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.517 01:43:37 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 Null4 00:08:52.517 01:43:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.517 01:43:38 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:52.517 01:43:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.517 01:43:38 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 01:43:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.517 01:43:38 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:52.517 01:43:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.517 01:43:38 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 01:43:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.517 01:43:38 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:52.517 
01:43:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.517 01:43:38 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 01:43:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.517 01:43:38 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:52.517 01:43:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.517 01:43:38 -- common/autotest_common.sh@10 -- # set +x 00:08:52.517 01:43:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.518 01:43:38 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:52.518 01:43:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.518 01:43:38 -- common/autotest_common.sh@10 -- # set +x 00:08:52.518 01:43:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.518 01:43:38 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:52.775 00:08:52.775 Discovery Log Number of Records 6, Generation counter 6 00:08:52.775 =====Discovery Log Entry 0====== 00:08:52.775 trtype: tcp 00:08:52.775 adrfam: ipv4 00:08:52.775 subtype: current discovery subsystem 00:08:52.775 treq: not required 00:08:52.775 portid: 0 00:08:52.775 trsvcid: 4420 00:08:52.775 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:52.775 traddr: 10.0.0.2 00:08:52.775 eflags: explicit discovery connections, duplicate discovery information 00:08:52.775 sectype: none 00:08:52.775 =====Discovery Log Entry 1====== 00:08:52.775 trtype: tcp 00:08:52.775 adrfam: ipv4 00:08:52.775 subtype: nvme subsystem 00:08:52.775 treq: not required 00:08:52.775 portid: 0 00:08:52.775 trsvcid: 4420 00:08:52.775 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:52.775 traddr: 10.0.0.2 00:08:52.775 eflags: none 00:08:52.775 sectype: none 00:08:52.775 =====Discovery Log Entry 2====== 00:08:52.775 trtype: tcp 00:08:52.775 adrfam: ipv4 00:08:52.775 subtype: nvme subsystem 00:08:52.775 treq: not required 00:08:52.775 portid: 0 00:08:52.775 trsvcid: 4420 00:08:52.775 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:52.775 traddr: 10.0.0.2 00:08:52.775 eflags: none 00:08:52.775 sectype: none 00:08:52.775 =====Discovery Log Entry 3====== 00:08:52.775 trtype: tcp 00:08:52.775 adrfam: ipv4 00:08:52.775 subtype: nvme subsystem 00:08:52.775 treq: not required 00:08:52.775 portid: 0 00:08:52.775 trsvcid: 4420 00:08:52.775 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:52.775 traddr: 10.0.0.2 00:08:52.775 eflags: none 00:08:52.775 sectype: none 00:08:52.775 =====Discovery Log Entry 4====== 00:08:52.775 trtype: tcp 00:08:52.775 adrfam: ipv4 00:08:52.775 subtype: nvme subsystem 00:08:52.775 treq: not required 00:08:52.775 portid: 0 00:08:52.775 trsvcid: 4420 00:08:52.775 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:52.775 traddr: 10.0.0.2 00:08:52.775 eflags: none 00:08:52.775 sectype: none 00:08:52.775 =====Discovery Log Entry 5====== 00:08:52.775 trtype: tcp 00:08:52.775 adrfam: ipv4 00:08:52.775 subtype: discovery subsystem referral 00:08:52.775 treq: not required 00:08:52.775 portid: 0 00:08:52.775 trsvcid: 4430 00:08:52.775 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:52.775 traddr: 10.0.0.2 00:08:52.775 eflags: none 00:08:52.775 sectype: none 00:08:52.775 01:43:38 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:52.775 Perform nvmf subsystem discovery via RPC 00:08:52.775 01:43:38 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:52.775 01:43:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.775 01:43:38 -- common/autotest_common.sh@10 -- # set +x 00:08:52.775 [2024-04-15 01:43:38.258853] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:52.775 [ 00:08:52.776 { 00:08:52.776 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:52.776 "subtype": "Discovery", 00:08:52.776 "listen_addresses": [ 00:08:52.776 { 00:08:52.776 "transport": "TCP", 00:08:52.776 "trtype": "TCP", 00:08:52.776 "adrfam": "IPv4", 00:08:52.776 "traddr": "10.0.0.2", 00:08:52.776 "trsvcid": "4420" 00:08:52.776 } 00:08:52.776 ], 00:08:52.776 "allow_any_host": true, 00:08:52.776 "hosts": [] 00:08:52.776 }, 00:08:52.776 { 00:08:52.776 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.776 "subtype": "NVMe", 00:08:52.776 "listen_addresses": [ 00:08:52.776 { 00:08:52.776 "transport": "TCP", 00:08:52.776 "trtype": "TCP", 00:08:52.776 "adrfam": "IPv4", 00:08:52.776 "traddr": "10.0.0.2", 00:08:52.776 "trsvcid": "4420" 00:08:52.776 } 00:08:52.776 ], 00:08:52.776 "allow_any_host": true, 00:08:52.776 "hosts": [], 00:08:52.776 "serial_number": "SPDK00000000000001", 00:08:52.776 "model_number": "SPDK bdev Controller", 00:08:52.776 "max_namespaces": 32, 00:08:52.776 "min_cntlid": 1, 00:08:52.776 "max_cntlid": 65519, 00:08:52.776 "namespaces": [ 00:08:52.776 { 00:08:52.776 "nsid": 1, 00:08:52.776 "bdev_name": "Null1", 00:08:52.776 "name": "Null1", 00:08:52.776 "nguid": "47F8F8353C824686A521EB773D0434B1", 00:08:52.776 "uuid": "47f8f835-3c82-4686-a521-eb773d0434b1" 00:08:52.776 } 00:08:52.776 ] 00:08:52.776 }, 00:08:52.776 { 00:08:52.776 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:52.776 "subtype": "NVMe", 00:08:52.776 "listen_addresses": [ 00:08:52.776 { 00:08:52.776 "transport": "TCP", 00:08:52.776 "trtype": "TCP", 00:08:52.776 "adrfam": "IPv4", 00:08:52.776 "traddr": "10.0.0.2", 00:08:52.776 "trsvcid": "4420" 00:08:52.776 } 00:08:52.776 ], 00:08:52.776 "allow_any_host": true, 00:08:52.776 "hosts": [], 00:08:52.776 "serial_number": "SPDK00000000000002", 00:08:52.776 "model_number": "SPDK bdev Controller", 00:08:52.776 "max_namespaces": 32, 00:08:52.776 "min_cntlid": 1, 00:08:52.776 "max_cntlid": 65519, 00:08:52.776 "namespaces": [ 00:08:52.776 { 00:08:52.776 "nsid": 1, 00:08:52.776 "bdev_name": "Null2", 00:08:52.776 "name": "Null2", 00:08:52.776 "nguid": "610C2BB6DA524F8BAF60A0C6C5D24810", 00:08:52.776 "uuid": "610c2bb6-da52-4f8b-af60-a0c6c5d24810" 00:08:52.776 } 00:08:52.776 ] 00:08:52.776 }, 00:08:52.776 { 00:08:52.776 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:52.776 "subtype": "NVMe", 00:08:52.776 "listen_addresses": [ 00:08:52.776 { 00:08:52.776 "transport": "TCP", 00:08:52.776 "trtype": "TCP", 00:08:52.776 "adrfam": "IPv4", 00:08:52.776 "traddr": "10.0.0.2", 00:08:52.776 "trsvcid": "4420" 00:08:52.776 } 00:08:52.776 ], 00:08:52.776 "allow_any_host": true, 00:08:52.776 "hosts": [], 00:08:52.776 "serial_number": "SPDK00000000000003", 00:08:52.776 "model_number": "SPDK bdev Controller", 00:08:52.776 "max_namespaces": 32, 00:08:52.776 "min_cntlid": 1, 00:08:52.776 "max_cntlid": 65519, 00:08:52.776 "namespaces": [ 00:08:52.776 { 00:08:52.776 "nsid": 1, 00:08:52.776 "bdev_name": "Null3", 00:08:52.776 "name": "Null3", 00:08:52.776 "nguid": "69A4C9CCD8AE4726B741D367B2BE92E6", 00:08:52.776 "uuid": "69a4c9cc-d8ae-4726-b741-d367b2be92e6" 00:08:52.776 } 00:08:52.776 ] 
00:08:52.776 }, 00:08:52.776 { 00:08:52.776 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:52.776 "subtype": "NVMe", 00:08:52.776 "listen_addresses": [ 00:08:52.776 { 00:08:52.776 "transport": "TCP", 00:08:52.776 "trtype": "TCP", 00:08:52.776 "adrfam": "IPv4", 00:08:52.776 "traddr": "10.0.0.2", 00:08:52.776 "trsvcid": "4420" 00:08:52.776 } 00:08:52.776 ], 00:08:52.776 "allow_any_host": true, 00:08:52.776 "hosts": [], 00:08:52.776 "serial_number": "SPDK00000000000004", 00:08:52.776 "model_number": "SPDK bdev Controller", 00:08:52.776 "max_namespaces": 32, 00:08:52.776 "min_cntlid": 1, 00:08:52.776 "max_cntlid": 65519, 00:08:52.776 "namespaces": [ 00:08:52.776 { 00:08:52.776 "nsid": 1, 00:08:52.776 "bdev_name": "Null4", 00:08:52.776 "name": "Null4", 00:08:52.776 "nguid": "A07B88C89B9443B2A65C322755BB6845", 00:08:52.776 "uuid": "a07b88c8-9b94-43b2-a65c-322755bb6845" 00:08:52.776 } 00:08:52.776 ] 00:08:52.776 } 00:08:52.776 ] 00:08:52.776 01:43:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.776 01:43:38 -- target/discovery.sh@42 -- # seq 1 4 00:08:52.776 01:43:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:52.776 01:43:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:52.776 01:43:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.776 01:43:38 -- common/autotest_common.sh@10 -- # set +x 00:08:52.776 01:43:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.776 01:43:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:52.776 01:43:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.776 01:43:38 -- common/autotest_common.sh@10 -- # set +x 00:08:52.776 01:43:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.776 01:43:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:52.776 01:43:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:52.776 01:43:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.776 01:43:38 -- common/autotest_common.sh@10 -- # set +x 00:08:52.776 01:43:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.776 01:43:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:52.776 01:43:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.776 01:43:38 -- common/autotest_common.sh@10 -- # set +x 00:08:52.776 01:43:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.776 01:43:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:52.776 01:43:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:52.776 01:43:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.776 01:43:38 -- common/autotest_common.sh@10 -- # set +x 00:08:52.776 01:43:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.776 01:43:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:52.776 01:43:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.776 01:43:38 -- common/autotest_common.sh@10 -- # set +x 00:08:52.776 01:43:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.776 01:43:38 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:52.776 01:43:38 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:52.776 01:43:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.776 01:43:38 -- common/autotest_common.sh@10 -- # set +x 00:08:52.776 01:43:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
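The create loop and the nvmf_get_subsystems dump above follow one fixed pattern per subsystem: null bdev, subsystem, namespace, TCP listener. A standalone equivalent using scripts/rpc.py directly, with every argument copied from the trace and only the rpc.py path as a placeholder, would be:

    rpc="$SPDK_DIR/scripts/rpc.py"
    $rpc nvmf_create_transport -t tcp -o -u 8192   # flags copied verbatim from the trace
    for i in $(seq 1 4); do
        $rpc bdev_null_create "Null$i" 102400 512  # size/block size as in the trace
        $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
            -a -s "SPDK0000000000000$i"            # allow any host, fixed serial
        $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
        $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done
    $rpc nvmf_get_subsystems | jq -r '.[].nqn'     # verify, as the RPC dump above does

The teardown that follows simply reverses the loop: nvmf_delete_subsystem for each cnode, then bdev_null_delete for each null bdev.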
00:08:52.776 01:43:38 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:52.776 01:43:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.776 01:43:38 -- common/autotest_common.sh@10 -- # set +x 00:08:52.776 01:43:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.776 01:43:38 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:52.776 01:43:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.776 01:43:38 -- common/autotest_common.sh@10 -- # set +x 00:08:52.776 01:43:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.776 01:43:38 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:52.776 01:43:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:52.776 01:43:38 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:52.776 01:43:38 -- common/autotest_common.sh@10 -- # set +x 00:08:52.776 01:43:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:52.776 01:43:38 -- target/discovery.sh@49 -- # check_bdevs= 00:08:52.776 01:43:38 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:52.776 01:43:38 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:52.776 01:43:38 -- target/discovery.sh@57 -- # nvmftestfini 00:08:52.776 01:43:38 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:52.776 01:43:38 -- nvmf/common.sh@116 -- # sync 00:08:52.776 01:43:38 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:52.776 01:43:38 -- nvmf/common.sh@119 -- # set +e 00:08:52.776 01:43:38 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:52.776 01:43:38 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:52.776 rmmod nvme_tcp 00:08:52.776 rmmod nvme_fabrics 00:08:52.776 rmmod nvme_keyring 00:08:53.034 01:43:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:53.034 01:43:38 -- nvmf/common.sh@123 -- # set -e 00:08:53.034 01:43:38 -- nvmf/common.sh@124 -- # return 0 00:08:53.034 01:43:38 -- nvmf/common.sh@477 -- # '[' -n 2065740 ']' 00:08:53.034 01:43:38 -- nvmf/common.sh@478 -- # killprocess 2065740 00:08:53.034 01:43:38 -- common/autotest_common.sh@926 -- # '[' -z 2065740 ']' 00:08:53.034 01:43:38 -- common/autotest_common.sh@930 -- # kill -0 2065740 00:08:53.034 01:43:38 -- common/autotest_common.sh@931 -- # uname 00:08:53.034 01:43:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:53.034 01:43:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2065740 00:08:53.034 01:43:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:53.034 01:43:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:53.034 01:43:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2065740' 00:08:53.034 killing process with pid 2065740 00:08:53.034 01:43:38 -- common/autotest_common.sh@945 -- # kill 2065740 00:08:53.034 [2024-04-15 01:43:38.462997] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:53.034 01:43:38 -- common/autotest_common.sh@950 -- # wait 2065740 00:08:53.293 01:43:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:53.293 01:43:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:53.293 01:43:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:53.293 01:43:38 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:53.293 01:43:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:53.293 01:43:38 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.293 01:43:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:53.293 01:43:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.202 01:43:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:08:55.202 00:08:55.202 real 0m6.264s 00:08:55.202 user 0m7.506s 00:08:55.202 sys 0m1.972s 00:08:55.202 01:43:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:55.202 01:43:40 -- common/autotest_common.sh@10 -- # set +x 00:08:55.202 ************************************ 00:08:55.202 END TEST nvmf_discovery 00:08:55.202 ************************************ 00:08:55.202 01:43:40 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:55.202 01:43:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:55.202 01:43:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:55.202 01:43:40 -- common/autotest_common.sh@10 -- # set +x 00:08:55.202 ************************************ 00:08:55.202 START TEST nvmf_referrals 00:08:55.202 ************************************ 00:08:55.202 01:43:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:55.202 * Looking for test storage... 00:08:55.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:55.202 01:43:40 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:55.202 01:43:40 -- nvmf/common.sh@7 -- # uname -s 00:08:55.202 01:43:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:55.202 01:43:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:55.202 01:43:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:55.202 01:43:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:55.202 01:43:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:55.202 01:43:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:55.202 01:43:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:55.202 01:43:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:55.202 01:43:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:55.202 01:43:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:55.202 01:43:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:55.202 01:43:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:55.202 01:43:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:55.202 01:43:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:55.202 01:43:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:55.202 01:43:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:55.202 01:43:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:55.202 01:43:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:55.202 01:43:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:55.202 01:43:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.202 01:43:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.202 01:43:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.202 01:43:40 -- paths/export.sh@5 -- # export PATH 00:08:55.202 01:43:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:55.202 01:43:40 -- nvmf/common.sh@46 -- # : 0 00:08:55.202 01:43:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:55.202 01:43:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:55.202 01:43:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:55.202 01:43:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:55.202 01:43:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:55.202 01:43:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:55.202 01:43:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:55.202 01:43:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:55.202 01:43:40 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:55.202 01:43:40 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:55.202 01:43:40 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:55.202 01:43:40 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:55.202 01:43:40 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:55.202 01:43:40 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:55.202 01:43:40 -- target/referrals.sh@37 -- # nvmftestinit 00:08:55.202 01:43:40 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:08:55.202 01:43:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:55.202 01:43:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:55.202 01:43:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:55.202 01:43:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:55.203 01:43:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.203 01:43:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:55.203 01:43:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:55.203 01:43:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:55.203 01:43:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:55.203 01:43:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:55.203 01:43:40 -- common/autotest_common.sh@10 -- # set +x 00:08:57.736 01:43:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:57.736 01:43:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:57.736 01:43:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:57.736 01:43:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:57.736 01:43:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:57.736 01:43:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:57.736 01:43:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:57.736 01:43:42 -- nvmf/common.sh@294 -- # net_devs=() 00:08:57.736 01:43:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:57.736 01:43:42 -- nvmf/common.sh@295 -- # e810=() 00:08:57.736 01:43:42 -- nvmf/common.sh@295 -- # local -ga e810 00:08:57.736 01:43:42 -- nvmf/common.sh@296 -- # x722=() 00:08:57.736 01:43:42 -- nvmf/common.sh@296 -- # local -ga x722 00:08:57.736 01:43:42 -- nvmf/common.sh@297 -- # mlx=() 00:08:57.736 01:43:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:57.736 01:43:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:57.736 01:43:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:57.736 01:43:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:57.736 01:43:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:57.736 01:43:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:57.736 01:43:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:57.736 01:43:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:57.736 01:43:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:57.736 01:43:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:57.736 01:43:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:57.736 01:43:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:57.736 01:43:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:57.736 01:43:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:08:57.736 01:43:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:08:57.736 01:43:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:08:57.736 01:43:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:08:57.736 01:43:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:57.736 01:43:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:57.736 01:43:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:57.736 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:57.736 01:43:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:57.736 01:43:42 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:57.736 01:43:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.736 01:43:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.736 01:43:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:57.736 01:43:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:57.736 01:43:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:57.736 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:57.736 01:43:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:08:57.736 01:43:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:08:57.736 01:43:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:57.736 01:43:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:57.736 01:43:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:08:57.736 01:43:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:57.736 01:43:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:08:57.736 01:43:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:08:57.736 01:43:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:57.736 01:43:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.736 01:43:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:57.736 01:43:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.736 01:43:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:57.736 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:57.736 01:43:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.736 01:43:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:57.736 01:43:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:57.736 01:43:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:57.736 01:43:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:57.736 01:43:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:57.736 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:57.736 01:43:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:57.736 01:43:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:57.736 01:43:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:57.736 01:43:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:57.736 01:43:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:08:57.736 01:43:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:08:57.736 01:43:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:57.736 01:43:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:57.736 01:43:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:57.736 01:43:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:08:57.736 01:43:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:57.737 01:43:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:57.737 01:43:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:08:57.737 01:43:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:57.737 01:43:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:57.737 01:43:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:08:57.737 01:43:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:08:57.737 01:43:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:08:57.737 01:43:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
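The PCI scan above shows how common.sh decides which interfaces a test may use: it matches known NIC device IDs (0x159b is an Intel E810 "ice" function here), then maps each PCI function to its kernel net devices by globbing the device's sysfs net directory. The same lookup done by hand, with the bus address taken from the log:

    # One net device is expected per E810 function; cvl_0_0 / cvl_0_1 in this run
    pci=0000:0a:00.0
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        [[ -e $dev ]] && echo "Found net device under $pci: ${dev##*/}"
    done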
00:08:57.737 01:43:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:57.737 01:43:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:57.737 01:43:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:08:57.737 01:43:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:57.737 01:43:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:57.737 01:43:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:57.737 01:43:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:08:57.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:57.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:08:57.737 00:08:57.737 --- 10.0.0.2 ping statistics --- 00:08:57.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.737 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:08:57.737 01:43:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:57.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:57.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.112 ms 00:08:57.737 00:08:57.737 --- 10.0.0.1 ping statistics --- 00:08:57.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:57.737 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:08:57.737 01:43:43 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:57.737 01:43:43 -- nvmf/common.sh@410 -- # return 0 00:08:57.737 01:43:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:57.737 01:43:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:57.737 01:43:43 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:57.737 01:43:43 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:57.737 01:43:43 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:57.737 01:43:43 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:57.737 01:43:43 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:57.737 01:43:43 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:57.737 01:43:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:57.737 01:43:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:57.737 01:43:43 -- common/autotest_common.sh@10 -- # set +x 00:08:57.737 01:43:43 -- nvmf/common.sh@469 -- # nvmfpid=2067862 00:08:57.737 01:43:43 -- nvmf/common.sh@470 -- # waitforlisten 2067862 00:08:57.737 01:43:43 -- common/autotest_common.sh@819 -- # '[' -z 2067862 ']' 00:08:57.737 01:43:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:57.737 01:43:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:57.737 01:43:43 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:57.737 01:43:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:57.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:57.737 01:43:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:57.737 01:43:43 -- common/autotest_common.sh@10 -- # set +x 00:08:57.737 [2024-04-15 01:43:43.077620] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
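nvmf_tcp_init, traced above, wires the two E810 ports into a loopback topology: cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1) while cvl_0_0 moves into cvl_0_0_ns_spdk as the target side (10.0.0.2), with an iptables rule opening the NVMe/TCP port and a ping pair confirming reachability. Condensed to just the commands the trace ran:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target interface
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # target reachable from root ns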
00:08:57.737 [2024-04-15 01:43:43.077711] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:57.737 EAL: No free 2048 kB hugepages reported on node 1 00:08:57.737 [2024-04-15 01:43:43.148903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:57.737 [2024-04-15 01:43:43.241209] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:57.737 [2024-04-15 01:43:43.241374] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:57.737 [2024-04-15 01:43:43.241391] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:57.737 [2024-04-15 01:43:43.241403] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:57.737 [2024-04-15 01:43:43.241452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:57.737 [2024-04-15 01:43:43.241511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:57.737 [2024-04-15 01:43:43.241580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:57.737 [2024-04-15 01:43:43.241584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:58.670 01:43:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:58.670 01:43:44 -- common/autotest_common.sh@852 -- # return 0 00:08:58.670 01:43:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:58.670 01:43:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:58.670 01:43:44 -- common/autotest_common.sh@10 -- # set +x 00:08:58.670 01:43:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:58.670 01:43:44 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:58.670 01:43:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.670 01:43:44 -- common/autotest_common.sh@10 -- # set +x 00:08:58.670 [2024-04-15 01:43:44.060697] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:58.670 01:43:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.670 01:43:44 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:58.670 01:43:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.670 01:43:44 -- common/autotest_common.sh@10 -- # set +x 00:08:58.670 [2024-04-15 01:43:44.072874] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:58.670 01:43:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.670 01:43:44 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:58.670 01:43:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.670 01:43:44 -- common/autotest_common.sh@10 -- # set +x 00:08:58.670 01:43:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.670 01:43:44 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:58.670 01:43:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.670 01:43:44 -- common/autotest_common.sh@10 -- # set +x 00:08:58.670 01:43:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.670 01:43:44 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 
-s 4430 00:08:58.670 01:43:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.670 01:43:44 -- common/autotest_common.sh@10 -- # set +x 00:08:58.670 01:43:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.670 01:43:44 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:58.670 01:43:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.670 01:43:44 -- target/referrals.sh@48 -- # jq length 00:08:58.670 01:43:44 -- common/autotest_common.sh@10 -- # set +x 00:08:58.670 01:43:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.670 01:43:44 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:58.670 01:43:44 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:58.670 01:43:44 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:58.670 01:43:44 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:58.670 01:43:44 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:58.670 01:43:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.670 01:43:44 -- common/autotest_common.sh@10 -- # set +x 00:08:58.670 01:43:44 -- target/referrals.sh@21 -- # sort 00:08:58.670 01:43:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.670 01:43:44 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:58.670 01:43:44 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:58.670 01:43:44 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:58.670 01:43:44 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:58.670 01:43:44 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:58.670 01:43:44 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.670 01:43:44 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:58.670 01:43:44 -- target/referrals.sh@26 -- # sort 00:08:58.928 01:43:44 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:58.928 01:43:44 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:58.928 01:43:44 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:58.928 01:43:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.928 01:43:44 -- common/autotest_common.sh@10 -- # set +x 00:08:58.928 01:43:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.928 01:43:44 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:58.928 01:43:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.928 01:43:44 -- common/autotest_common.sh@10 -- # set +x 00:08:58.928 01:43:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.928 01:43:44 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:58.928 01:43:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.928 01:43:44 -- common/autotest_common.sh@10 -- # set +x 00:08:58.928 01:43:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.928 01:43:44 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:58.928 01:43:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.928 01:43:44 -- 
target/referrals.sh@56 -- # jq length 00:08:58.928 01:43:44 -- common/autotest_common.sh@10 -- # set +x 00:08:58.928 01:43:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.928 01:43:44 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:58.928 01:43:44 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:58.928 01:43:44 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:58.928 01:43:44 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:58.929 01:43:44 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:58.929 01:43:44 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:58.929 01:43:44 -- target/referrals.sh@26 -- # sort 00:08:58.929 01:43:44 -- target/referrals.sh@26 -- # echo 00:08:58.929 01:43:44 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:58.929 01:43:44 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:58.929 01:43:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.929 01:43:44 -- common/autotest_common.sh@10 -- # set +x 00:08:58.929 01:43:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.929 01:43:44 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:58.929 01:43:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.929 01:43:44 -- common/autotest_common.sh@10 -- # set +x 00:08:58.929 01:43:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:58.929 01:43:44 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:58.929 01:43:44 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:58.929 01:43:44 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:58.929 01:43:44 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:58.929 01:43:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:58.929 01:43:44 -- common/autotest_common.sh@10 -- # set +x 00:08:58.929 01:43:44 -- target/referrals.sh@21 -- # sort 00:08:58.929 01:43:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.186 01:43:44 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:59.186 01:43:44 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:59.186 01:43:44 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:59.186 01:43:44 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:59.186 01:43:44 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:59.186 01:43:44 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:59.186 01:43:44 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:59.186 01:43:44 -- target/referrals.sh@26 -- # sort 00:08:59.186 01:43:44 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:59.186 01:43:44 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:59.186 01:43:44 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:59.186 01:43:44 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:59.186 01:43:44 -- 
target/referrals.sh@67 -- # jq -r .subnqn 00:08:59.186 01:43:44 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:59.186 01:43:44 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:59.444 01:43:44 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:59.444 01:43:44 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:59.444 01:43:44 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:59.444 01:43:44 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:59.444 01:43:44 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:59.444 01:43:44 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:59.444 01:43:45 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:59.444 01:43:45 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:59.444 01:43:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.444 01:43:45 -- common/autotest_common.sh@10 -- # set +x 00:08:59.444 01:43:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.444 01:43:45 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:59.444 01:43:45 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:59.444 01:43:45 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:59.444 01:43:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.444 01:43:45 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:59.444 01:43:45 -- common/autotest_common.sh@10 -- # set +x 00:08:59.444 01:43:45 -- target/referrals.sh@21 -- # sort 00:08:59.444 01:43:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.702 01:43:45 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:59.702 01:43:45 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:59.702 01:43:45 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:59.702 01:43:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:59.702 01:43:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:59.702 01:43:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:59.702 01:43:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:59.702 01:43:45 -- target/referrals.sh@26 -- # sort 00:08:59.702 01:43:45 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:59.702 01:43:45 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:59.702 01:43:45 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:59.702 01:43:45 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:59.702 01:43:45 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:59.702 01:43:45 -- target/referrals.sh@33 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:59.702 01:43:45 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:59.702 01:43:45 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:59.702 01:43:45 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:59.702 01:43:45 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:59.702 01:43:45 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:59.702 01:43:45 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:59.702 01:43:45 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:59.960 01:43:45 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:59.960 01:43:45 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:59.960 01:43:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.960 01:43:45 -- common/autotest_common.sh@10 -- # set +x 00:08:59.960 01:43:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.960 01:43:45 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:59.960 01:43:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:59.960 01:43:45 -- target/referrals.sh@82 -- # jq length 00:08:59.960 01:43:45 -- common/autotest_common.sh@10 -- # set +x 00:08:59.960 01:43:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:59.960 01:43:45 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:59.960 01:43:45 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:59.960 01:43:45 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:59.960 01:43:45 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:59.960 01:43:45 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:59.960 01:43:45 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:59.960 01:43:45 -- target/referrals.sh@26 -- # sort 00:08:59.960 01:43:45 -- target/referrals.sh@26 -- # echo 00:08:59.960 01:43:45 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:59.960 01:43:45 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:59.960 01:43:45 -- target/referrals.sh@86 -- # nvmftestfini 00:08:59.960 01:43:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:59.960 01:43:45 -- nvmf/common.sh@116 -- # sync 00:08:59.960 01:43:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:59.960 01:43:45 -- nvmf/common.sh@119 -- # set +e 00:08:59.960 01:43:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:59.960 01:43:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:59.960 rmmod nvme_tcp 00:08:59.960 rmmod nvme_fabrics 00:08:59.960 rmmod nvme_keyring 00:08:59.960 01:43:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:59.960 01:43:45 -- nvmf/common.sh@123 -- # set -e 00:08:59.960 01:43:45 -- nvmf/common.sh@124 -- # return 0 00:08:59.960 01:43:45 -- nvmf/common.sh@477 
-- # '[' -n 2067862 ']' 00:08:59.960 01:43:45 -- nvmf/common.sh@478 -- # killprocess 2067862 00:08:59.960 01:43:45 -- common/autotest_common.sh@926 -- # '[' -z 2067862 ']' 00:08:59.960 01:43:45 -- common/autotest_common.sh@930 -- # kill -0 2067862 00:08:59.960 01:43:45 -- common/autotest_common.sh@931 -- # uname 00:08:59.960 01:43:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:59.960 01:43:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2067862 00:08:59.960 01:43:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:59.960 01:43:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:59.960 01:43:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2067862' 00:08:59.960 killing process with pid 2067862 00:08:59.960 01:43:45 -- common/autotest_common.sh@945 -- # kill 2067862 00:08:59.960 01:43:45 -- common/autotest_common.sh@950 -- # wait 2067862 00:09:00.220 01:43:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:00.220 01:43:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:00.220 01:43:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:00.220 01:43:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:00.220 01:43:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:00.220 01:43:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.220 01:43:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:00.220 01:43:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.761 01:43:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:09:02.761 00:09:02.761 real 0m7.056s 00:09:02.761 user 0m11.610s 00:09:02.761 sys 0m2.146s 00:09:02.761 01:43:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:02.761 01:43:47 -- common/autotest_common.sh@10 -- # set +x 00:09:02.761 ************************************ 00:09:02.761 END TEST nvmf_referrals 00:09:02.761 ************************************ 00:09:02.761 01:43:47 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:02.761 01:43:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:02.761 01:43:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:02.761 01:43:47 -- common/autotest_common.sh@10 -- # set +x 00:09:02.761 ************************************ 00:09:02.761 START TEST nvmf_connect_disconnect 00:09:02.761 ************************************ 00:09:02.761 01:43:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:09:02.761 * Looking for test storage... 
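The nvmf_referrals run that finished above cycles the referral API end to end: register 127.0.0.2/3/4 as discovery referrals on port 4430, check that nvmf_discovery_get_referrals and an on-the-wire nvme discover agree, remove them, then repeat with explicit subsystem NQNs (-n discovery / -n nqn.2016-06.io.spdk:cnode1). One such cycle, condensed from the trace with the rpc.py path as a placeholder and the long --hostnqn/--hostid pair dropped:

    rpc="$SPDK_DIR/scripts/rpc.py"
    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
    done
    $rpc nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort

    # The same three addresses must appear on the wire via the discovery service
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json |
        jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
        $rpc nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
    done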
00:09:02.761 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:02.761 01:43:47 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:02.761 01:43:47 -- nvmf/common.sh@7 -- # uname -s 00:09:02.761 01:43:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:02.761 01:43:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:02.761 01:43:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:02.761 01:43:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:02.761 01:43:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:02.761 01:43:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:02.761 01:43:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:02.761 01:43:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:02.761 01:43:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:02.761 01:43:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:02.761 01:43:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:02.761 01:43:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:02.761 01:43:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:02.761 01:43:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:02.761 01:43:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:02.761 01:43:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:02.761 01:43:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:02.761 01:43:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:02.761 01:43:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:02.761 01:43:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.761 01:43:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.761 01:43:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.761 01:43:47 -- paths/export.sh@5 -- # export PATH 00:09:02.761 01:43:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:02.761 01:43:47 -- nvmf/common.sh@46 -- # : 0 00:09:02.761 01:43:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:02.761 01:43:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:02.761 01:43:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:02.761 01:43:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:02.761 01:43:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:02.761 01:43:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:02.761 01:43:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:02.761 01:43:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:02.761 01:43:47 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:02.761 01:43:47 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:02.761 01:43:47 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:02.761 01:43:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:02.761 01:43:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:02.761 01:43:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:02.761 01:43:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:02.761 01:43:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:02.761 01:43:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:02.761 01:43:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:02.761 01:43:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:02.761 01:43:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:02.761 01:43:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:02.761 01:43:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:02.761 01:43:47 -- common/autotest_common.sh@10 -- # set +x 00:09:04.696 01:43:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:04.696 01:43:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:04.696 01:43:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:04.696 01:43:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:04.696 01:43:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:04.696 01:43:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:04.696 01:43:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:04.696 01:43:49 -- nvmf/common.sh@294 -- # net_devs=() 00:09:04.696 01:43:49 -- nvmf/common.sh@294 -- # local -ga net_devs 
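The gather_supported_nvmf_pci_devs trace that follows buckets the host's NICs purely by PCI vendor:device ID (0x8086 is Intel, 0x15b3 Mellanox) into e810/x722/mlx arrays before picking the test interfaces. A rough standalone equivalent of that classification, assuming plain lspci -Dn output instead of the suite's pre-built pci_bus_cache map:

# assumption: lspci is available; the suite keys off a cached PCI map instead
declare -a e810 x722 mlx
while read -r addr _class id _rest; do
    case $id in
        8086:1592|8086:159b) e810+=("$addr") ;;   # Intel E810 ports (ice driver)
        8086:37d2)           x722+=("$addr") ;;   # Intel X722
        15b3:*)              mlx+=("$addr")  ;;   # Mellanox ConnectX family
    esac
done < <(lspci -Dn)

On this machine both 0000:0a:00.0 and 0000:0a:00.1 match 0x8086:0x159b, so the e810 list becomes the pci_devs set the trace iterates below.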
00:09:04.696 01:43:49 -- nvmf/common.sh@295 -- # e810=() 00:09:04.696 01:43:49 -- nvmf/common.sh@295 -- # local -ga e810 00:09:04.696 01:43:49 -- nvmf/common.sh@296 -- # x722=() 00:09:04.696 01:43:49 -- nvmf/common.sh@296 -- # local -ga x722 00:09:04.696 01:43:49 -- nvmf/common.sh@297 -- # mlx=() 00:09:04.696 01:43:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:04.696 01:43:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:04.696 01:43:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:04.696 01:43:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:04.696 01:43:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:04.696 01:43:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:04.696 01:43:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:04.696 01:43:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:04.696 01:43:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:04.696 01:43:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:04.696 01:43:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:04.696 01:43:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:04.696 01:43:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:04.696 01:43:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:09:04.696 01:43:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:09:04.696 01:43:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:09:04.696 01:43:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:09:04.696 01:43:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:04.696 01:43:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:04.696 01:43:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:04.696 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:04.696 01:43:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:04.696 01:43:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:04.696 01:43:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.697 01:43:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.697 01:43:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:04.697 01:43:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:04.697 01:43:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:04.697 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:04.697 01:43:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:09:04.697 01:43:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:09:04.697 01:43:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:04.697 01:43:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:04.697 01:43:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:09:04.697 01:43:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:04.697 01:43:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:09:04.697 01:43:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:09:04.697 01:43:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:04.697 01:43:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.697 01:43:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:04.697 01:43:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.697 01:43:49 -- nvmf/common.sh@388 -- # echo 'Found net devices 
under 0000:0a:00.0: cvl_0_0' 00:09:04.697 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:04.697 01:43:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.697 01:43:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:04.697 01:43:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:04.697 01:43:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:04.697 01:43:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:04.697 01:43:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:04.697 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:04.697 01:43:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:04.697 01:43:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:04.697 01:43:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:04.697 01:43:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:04.697 01:43:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:09:04.697 01:43:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:09:04.697 01:43:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:04.697 01:43:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:04.697 01:43:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:04.697 01:43:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:09:04.697 01:43:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:04.697 01:43:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:04.697 01:43:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:09:04.697 01:43:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:04.697 01:43:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:04.697 01:43:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:09:04.697 01:43:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:09:04.697 01:43:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:09:04.697 01:43:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:04.697 01:43:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:04.697 01:43:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:04.697 01:43:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:09:04.697 01:43:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:04.697 01:43:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:04.697 01:43:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:04.697 01:43:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:09:04.697 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:04.697 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:09:04.697 00:09:04.697 --- 10.0.0.2 ping statistics --- 00:09:04.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.697 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:09:04.697 01:43:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:04.697 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:04.697 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:09:04.697 00:09:04.697 --- 10.0.0.1 ping statistics --- 00:09:04.697 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:04.697 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:09:04.697 01:43:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:04.697 01:43:49 -- nvmf/common.sh@410 -- # return 0 00:09:04.697 01:43:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:04.697 01:43:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:04.697 01:43:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:04.697 01:43:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:04.697 01:43:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:04.697 01:43:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:04.697 01:43:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:04.697 01:43:49 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:04.697 01:43:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:04.697 01:43:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:04.697 01:43:49 -- common/autotest_common.sh@10 -- # set +x 00:09:04.697 01:43:50 -- nvmf/common.sh@469 -- # nvmfpid=2070241 00:09:04.697 01:43:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:04.697 01:43:50 -- nvmf/common.sh@470 -- # waitforlisten 2070241 00:09:04.697 01:43:50 -- common/autotest_common.sh@819 -- # '[' -z 2070241 ']' 00:09:04.697 01:43:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.697 01:43:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:04.697 01:43:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.697 01:43:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:04.697 01:43:50 -- common/autotest_common.sh@10 -- # set +x 00:09:04.697 [2024-04-15 01:43:50.050767] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:09:04.697 [2024-04-15 01:43:50.050848] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.697 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.697 [2024-04-15 01:43:50.121785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.697 [2024-04-15 01:43:50.214830] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:04.697 [2024-04-15 01:43:50.214987] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.697 [2024-04-15 01:43:50.215007] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.697 [2024-04-15 01:43:50.215021] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
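The nvmf_tcp_init block above is what makes single-host TCP testing against real NICs possible: the first E810 port (cvl_0_0) is moved into a private network namespace to act as the target, the second (cvl_0_1) stays in the root namespace as the initiator, and a 10.0.0.0/24 link plus an iptables accept rule for port 4420 ties them together. Condensed from the trace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port leaves the root netns
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # verify the path in both directions

nvmf_tgt itself is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is why every NVMF_APP invocation in this log carries the netns prefix.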
00:09:04.697 [2024-04-15 01:43:50.215110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.697 [2024-04-15 01:43:50.215156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.697 [2024-04-15 01:43:50.215207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.697 [2024-04-15 01:43:50.215210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.636 01:43:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:05.636 01:43:51 -- common/autotest_common.sh@852 -- # return 0 00:09:05.636 01:43:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:05.636 01:43:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:05.636 01:43:51 -- common/autotest_common.sh@10 -- # set +x 00:09:05.636 01:43:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.636 01:43:51 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:05.636 01:43:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:05.636 01:43:51 -- common/autotest_common.sh@10 -- # set +x 00:09:05.636 [2024-04-15 01:43:51.076786] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.636 01:43:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:05.636 01:43:51 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:05.636 01:43:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:05.636 01:43:51 -- common/autotest_common.sh@10 -- # set +x 00:09:05.636 01:43:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:05.636 01:43:51 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:05.636 01:43:51 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:05.636 01:43:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:05.636 01:43:51 -- common/autotest_common.sh@10 -- # set +x 00:09:05.636 01:43:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:05.636 01:43:51 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:05.636 01:43:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:05.636 01:43:51 -- common/autotest_common.sh@10 -- # set +x 00:09:05.636 01:43:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:05.636 01:43:51 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:05.636 01:43:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:05.636 01:43:51 -- common/autotest_common.sh@10 -- # set +x 00:09:05.636 [2024-04-15 01:43:51.128523] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:05.636 01:43:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:05.636 01:43:51 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:05.636 01:43:51 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:05.636 01:43:51 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:05.636 01:43:51 -- target/connect_disconnect.sh@34 -- # set +x 00:09:08.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.614 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
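With the app listening on /var/tmp/spdk.sock, four rpc_cmd calls provision the entire target, and the test body then connects and disconnects the host 100 times (num_iterations=100, NVME_CONNECT='nvme connect -i 8'); each pass emits one of the 'disconnected 1 controller(s)' notices running above and below this point. The same sequence with stock rpc.py and nvme-cli, shorn of the suite's wrappers:

rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc.py bdev_malloc_create 64 512                      # 64 MiB, 512 B blocks -> Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

for i in $(seq 1 100); do
    nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # prints the NQN:... notice seen in the log
done

The real connect_disconnect.sh also waits for the namespace's block device to appear between the two calls; that check is omitted from this sketch.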
00:09:17.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.595 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) [the identical notice repeats for each remaining iteration, roughly one every 2-3 seconds, from 00:09:22.134 through 00:11:06.425; the run is condensed here] 00:11:08.965 NQN:nqn.2016-06.io.spdk:cnode1
disconnected 1 controller(s) 00:11:11.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:13.414 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.947 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.858 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.385 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:29.911 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.335 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.324 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.850 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:45.746 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.799 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.225 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.750 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:02.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.728 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.151 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:11.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.577 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:16.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:18.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:20.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.092 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:30.043 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.006 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.020 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.973 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.932 01:47:43 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
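Clearing the trap hands control to nvmftestfini, whose work fills the next stretch of log: flush outstanding I/O, unload the initiator-side kernel modules (the bare rmmod lines are modprobe -v -r output), kill the target, and tear the namespace back down. In outline:

sync
modprobe -v -r nvme-tcp            # drags out nvme_tcp, nvme_fabrics, nvme_keyring
modprobe -v -r nvme-fabrics
killprocess "$nvmfpid"             # the nvmf_tgt started at the top of the test
_remove_spdk_ns                    # assumption: deletes cvl_0_0_ns_spdk and returns the port
ip -4 addr flush cvl_0_1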
00:12:57.932 01:47:43 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:57.932 01:47:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:57.932 01:47:43 -- nvmf/common.sh@116 -- # sync 00:12:57.932 01:47:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:57.932 01:47:43 -- nvmf/common.sh@119 -- # set +e 00:12:57.932 01:47:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:57.932 01:47:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:57.932 rmmod nvme_tcp 00:12:57.932 rmmod nvme_fabrics 00:12:57.932 rmmod nvme_keyring 00:12:57.932 01:47:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:57.932 01:47:43 -- nvmf/common.sh@123 -- # set -e 00:12:57.932 01:47:43 -- nvmf/common.sh@124 -- # return 0 00:12:57.932 01:47:43 -- nvmf/common.sh@477 -- # '[' -n 2070241 ']' 00:12:57.932 01:47:43 -- nvmf/common.sh@478 -- # killprocess 2070241 00:12:57.932 01:47:43 -- common/autotest_common.sh@926 -- # '[' -z 2070241 ']' 00:12:57.932 01:47:43 -- common/autotest_common.sh@930 -- # kill -0 2070241 00:12:57.932 01:47:43 -- common/autotest_common.sh@931 -- # uname 00:12:57.932 01:47:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:57.932 01:47:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2070241 00:12:57.932 01:47:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:57.932 01:47:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:57.932 01:47:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2070241' 00:12:57.932 killing process with pid 2070241 00:12:57.932 01:47:43 -- common/autotest_common.sh@945 -- # kill 2070241 00:12:57.932 01:47:43 -- common/autotest_common.sh@950 -- # wait 2070241 00:12:58.501 01:47:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:58.501 01:47:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:58.501 01:47:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:58.501 01:47:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:58.501 01:47:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:58.501 01:47:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:58.501 01:47:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:58.501 01:47:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.407 01:47:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:00.407 00:13:00.407 real 3m58.041s 00:13:00.407 user 15m7.300s 00:13:00.407 sys 0m34.454s 00:13:00.407 01:47:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:00.407 01:47:45 -- common/autotest_common.sh@10 -- # set +x 00:13:00.407 ************************************ 00:13:00.407 END TEST nvmf_connect_disconnect 00:13:00.407 ************************************ 00:13:00.407 01:47:45 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:00.407 01:47:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:00.407 01:47:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:00.407 01:47:45 -- common/autotest_common.sh@10 -- # set +x 00:13:00.407 ************************************ 00:13:00.407 START TEST nvmf_multitarget 00:13:00.407 ************************************ 00:13:00.407 01:47:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:13:00.407 * Looking for test storage... 
00:13:00.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:00.407 01:47:45 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:00.407 01:47:45 -- nvmf/common.sh@7 -- # uname -s 00:13:00.407 01:47:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:00.407 01:47:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:00.407 01:47:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:00.407 01:47:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:00.407 01:47:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:00.407 01:47:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:00.407 01:47:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:00.407 01:47:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:00.407 01:47:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:00.407 01:47:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:00.407 01:47:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.407 01:47:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:00.407 01:47:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:00.407 01:47:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:00.407 01:47:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:00.407 01:47:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:00.407 01:47:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:00.407 01:47:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:00.407 01:47:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:00.407 01:47:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.407 01:47:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.407 01:47:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.407 01:47:45 -- paths/export.sh@5 -- # export PATH 00:13:00.407 01:47:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:00.407 01:47:45 -- nvmf/common.sh@46 -- # : 0 00:13:00.407 01:47:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:00.407 01:47:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:00.407 01:47:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:00.407 01:47:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:00.407 01:47:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:00.407 01:47:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:00.407 01:47:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:00.407 01:47:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:00.407 01:47:45 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:00.407 01:47:45 -- target/multitarget.sh@15 -- # nvmftestinit 00:13:00.407 01:47:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:00.407 01:47:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:00.407 01:47:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:00.407 01:47:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:00.407 01:47:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:00.407 01:47:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:00.407 01:47:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:00.407 01:47:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:00.407 01:47:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:00.407 01:47:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:00.407 01:47:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:00.407 01:47:45 -- common/autotest_common.sh@10 -- # set +x 00:13:02.971 01:47:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:02.971 01:47:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:02.971 01:47:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:02.971 01:47:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:02.971 01:47:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:02.971 01:47:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:02.971 01:47:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:02.971 01:47:48 -- nvmf/common.sh@294 -- # net_devs=() 00:13:02.971 01:47:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:02.971 01:47:48 -- 
nvmf/common.sh@295 -- # e810=() 00:13:02.971 01:47:48 -- nvmf/common.sh@295 -- # local -ga e810 00:13:02.971 01:47:48 -- nvmf/common.sh@296 -- # x722=() 00:13:02.971 01:47:48 -- nvmf/common.sh@296 -- # local -ga x722 00:13:02.971 01:47:48 -- nvmf/common.sh@297 -- # mlx=() 00:13:02.971 01:47:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:02.971 01:47:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:02.971 01:47:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:02.971 01:47:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:02.971 01:47:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:02.971 01:47:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:02.971 01:47:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:02.971 01:47:48 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:02.971 01:47:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:02.971 01:47:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:02.971 01:47:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:02.971 01:47:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:02.971 01:47:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:02.971 01:47:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:02.971 01:47:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:02.971 01:47:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:02.971 01:47:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:02.971 01:47:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:02.971 01:47:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:02.971 01:47:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:02.971 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:02.971 01:47:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:02.971 01:47:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:02.971 01:47:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.971 01:47:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.971 01:47:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:02.971 01:47:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:02.971 01:47:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:02.971 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:02.971 01:47:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:02.971 01:47:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:02.971 01:47:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:02.972 01:47:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:02.972 01:47:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:02.972 01:47:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:02.972 01:47:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:02.972 01:47:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:02.972 01:47:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:02.972 01:47:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.972 01:47:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:02.972 01:47:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.972 01:47:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:13:02.972 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:02.972 01:47:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.972 01:47:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:02.972 01:47:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:02.972 01:47:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:02.972 01:47:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:02.972 01:47:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:02.972 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:02.972 01:47:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:02.972 01:47:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:02.972 01:47:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:02.972 01:47:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:02.972 01:47:48 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:02.972 01:47:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:02.972 01:47:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:02.972 01:47:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:02.972 01:47:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:02.972 01:47:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:02.972 01:47:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:02.972 01:47:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:02.972 01:47:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:02.972 01:47:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:02.972 01:47:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:02.972 01:47:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:02.972 01:47:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:02.972 01:47:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:02.972 01:47:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:02.972 01:47:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:02.972 01:47:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:02.972 01:47:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:02.972 01:47:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:02.972 01:47:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:02.972 01:47:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:02.972 01:47:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:02.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:02.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:13:02.972 00:13:02.972 --- 10.0.0.2 ping statistics --- 00:13:02.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.972 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:13:02.972 01:47:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:02.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:02.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:13:02.972 00:13:02.972 --- 10.0.0.1 ping statistics --- 00:13:02.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.972 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:13:02.972 01:47:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.972 01:47:48 -- nvmf/common.sh@410 -- # return 0 00:13:02.972 01:47:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:02.972 01:47:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.972 01:47:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:02.972 01:47:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:02.972 01:47:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.972 01:47:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:02.972 01:47:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:02.972 01:47:48 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:13:02.972 01:47:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:02.972 01:47:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:02.972 01:47:48 -- common/autotest_common.sh@10 -- # set +x 00:13:02.972 01:47:48 -- nvmf/common.sh@469 -- # nvmfpid=2102451 00:13:02.972 01:47:48 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:02.972 01:47:48 -- nvmf/common.sh@470 -- # waitforlisten 2102451 00:13:02.972 01:47:48 -- common/autotest_common.sh@819 -- # '[' -z 2102451 ']' 00:13:02.972 01:47:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.972 01:47:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:02.972 01:47:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.972 01:47:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:02.972 01:47:48 -- common/autotest_common.sh@10 -- # set +x 00:13:02.972 [2024-04-15 01:47:48.250197] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:13:02.972 [2024-04-15 01:47:48.250267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.972 EAL: No free 2048 kB hugepages reported on node 1 00:13:02.972 [2024-04-15 01:47:48.314558] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.972 [2024-04-15 01:47:48.397769] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:02.972 [2024-04-15 01:47:48.397930] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.972 [2024-04-15 01:47:48.397946] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.972 [2024-04-15 01:47:48.397959] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:02.972 [2024-04-15 01:47:48.398017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:02.972 [2024-04-15 01:47:48.398076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:02.972 [2024-04-15 01:47:48.398140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.972 [2024-04-15 01:47:48.398144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.906 01:47:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:03.906 01:47:49 -- common/autotest_common.sh@852 -- # return 0 00:13:03.906 01:47:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:03.906 01:47:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:03.906 01:47:49 -- common/autotest_common.sh@10 -- # set +x 00:13:03.906 01:47:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.906 01:47:49 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:03.906 01:47:49 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:03.906 01:47:49 -- target/multitarget.sh@21 -- # jq length 00:13:03.906 01:47:49 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:13:03.906 01:47:49 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:13:03.906 "nvmf_tgt_1" 00:13:03.906 01:47:49 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:13:04.163 "nvmf_tgt_2" 00:13:04.163 01:47:49 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:04.163 01:47:49 -- target/multitarget.sh@28 -- # jq length 00:13:04.163 01:47:49 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:13:04.163 01:47:49 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:13:04.163 true 00:13:04.163 01:47:49 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:13:04.422 true 00:13:04.422 01:47:49 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:13:04.422 01:47:49 -- target/multitarget.sh@35 -- # jq length 00:13:04.422 01:47:50 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:13:04.422 01:47:50 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:13:04.422 01:47:50 -- target/multitarget.sh@41 -- # nvmftestfini 00:13:04.422 01:47:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:04.422 01:47:50 -- nvmf/common.sh@116 -- # sync 00:13:04.422 01:47:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:04.422 01:47:50 -- nvmf/common.sh@119 -- # set +e 00:13:04.422 01:47:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:04.422 01:47:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:04.422 rmmod nvme_tcp 00:13:04.422 rmmod nvme_fabrics 00:13:04.422 rmmod nvme_keyring 00:13:04.422 01:47:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:04.681 01:47:50 -- nvmf/common.sh@123 -- # set -e 00:13:04.681 01:47:50 -- nvmf/common.sh@124 -- # return 0 
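The multitarget body that just ran exercises a different control surface: multitarget_rpc.py adds and removes whole nvmf targets inside one running app, and the script asserts the target count with jq after every step (1 default target, 3 after two creates, back to 1 after the deletes). Stripped of the rpc_py wrapper, the traced sequence is:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

[ "$($rpc nvmf_get_targets | jq length)" = 1 ]   # only the default target exists
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32      # prints "nvmf_tgt_1"
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32      # prints "nvmf_tgt_2"
[ "$($rpc nvmf_get_targets | jq length)" = 3 ]
$rpc nvmf_delete_target -n nvmf_tgt_1            # prints "true"
$rpc nvmf_delete_target -n nvmf_tgt_2            # prints "true"
[ "$($rpc nvmf_get_targets | jq length)" = 1 ]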
00:13:04.681 01:47:50 -- nvmf/common.sh@477 -- # '[' -n 2102451 ']' 00:13:04.681 01:47:50 -- nvmf/common.sh@478 -- # killprocess 2102451 00:13:04.681 01:47:50 -- common/autotest_common.sh@926 -- # '[' -z 2102451 ']' 00:13:04.681 01:47:50 -- common/autotest_common.sh@930 -- # kill -0 2102451 00:13:04.681 01:47:50 -- common/autotest_common.sh@931 -- # uname 00:13:04.681 01:47:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:04.681 01:47:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2102451 00:13:04.681 01:47:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:04.681 01:47:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:04.681 01:47:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2102451' 00:13:04.681 killing process with pid 2102451 00:13:04.681 01:47:50 -- common/autotest_common.sh@945 -- # kill 2102451 00:13:04.681 01:47:50 -- common/autotest_common.sh@950 -- # wait 2102451 00:13:04.681 01:47:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:04.681 01:47:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:04.681 01:47:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:04.681 01:47:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:04.681 01:47:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:04.681 01:47:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.681 01:47:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.681 01:47:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.214 01:47:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:07.214 00:13:07.214 real 0m6.455s 00:13:07.214 user 0m9.303s 00:13:07.214 sys 0m1.986s 00:13:07.214 01:47:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:07.214 01:47:52 -- common/autotest_common.sh@10 -- # set +x 00:13:07.214 ************************************ 00:13:07.214 END TEST nvmf_multitarget 00:13:07.214 ************************************ 00:13:07.214 01:47:52 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:07.214 01:47:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:07.214 01:47:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:07.214 01:47:52 -- common/autotest_common.sh@10 -- # set +x 00:13:07.214 ************************************ 00:13:07.214 START TEST nvmf_rpc 00:13:07.214 ************************************ 00:13:07.214 01:47:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:13:07.214 * Looking for test storage... 
00:13:07.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:07.214 01:47:52 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.214 01:47:52 -- nvmf/common.sh@7 -- # uname -s 00:13:07.214 01:47:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.214 01:47:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.214 01:47:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.214 01:47:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.214 01:47:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.214 01:47:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.214 01:47:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.214 01:47:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.215 01:47:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.215 01:47:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.215 01:47:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:07.215 01:47:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:07.215 01:47:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.215 01:47:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.215 01:47:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:07.215 01:47:52 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:07.215 01:47:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.215 01:47:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.215 01:47:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.215 01:47:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.215 01:47:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.215 01:47:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.215 01:47:52 -- paths/export.sh@5 -- # export PATH 00:13:07.215 01:47:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.215 01:47:52 -- nvmf/common.sh@46 -- # : 0 00:13:07.215 01:47:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:07.215 01:47:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:07.215 01:47:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:07.215 01:47:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.215 01:47:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.215 01:47:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:07.215 01:47:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:07.215 01:47:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:07.215 01:47:52 -- target/rpc.sh@11 -- # loops=5 00:13:07.215 01:47:52 -- target/rpc.sh@23 -- # nvmftestinit 00:13:07.215 01:47:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:07.215 01:47:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.215 01:47:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:07.215 01:47:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:07.215 01:47:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:07.215 01:47:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.215 01:47:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.215 01:47:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.215 01:47:52 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:07.215 01:47:52 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:07.215 01:47:52 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:07.215 01:47:52 -- common/autotest_common.sh@10 -- # set +x 00:13:09.114 01:47:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:09.114 01:47:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:09.114 01:47:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:09.114 01:47:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:09.114 01:47:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:09.114 01:47:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:09.114 01:47:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:09.114 01:47:54 -- nvmf/common.sh@294 -- # net_devs=() 00:13:09.114 01:47:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:09.114 01:47:54 -- nvmf/common.sh@295 -- # e810=() 00:13:09.114 01:47:54 -- nvmf/common.sh@295 -- # local -ga e810 00:13:09.114 
01:47:54 -- nvmf/common.sh@296 -- # x722=() 00:13:09.114 01:47:54 -- nvmf/common.sh@296 -- # local -ga x722 00:13:09.114 01:47:54 -- nvmf/common.sh@297 -- # mlx=() 00:13:09.114 01:47:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:09.114 01:47:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:09.114 01:47:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:09.114 01:47:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:09.114 01:47:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:09.114 01:47:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:09.114 01:47:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:09.114 01:47:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:09.114 01:47:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:09.114 01:47:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:09.114 01:47:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:09.114 01:47:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:09.114 01:47:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:09.114 01:47:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:09.114 01:47:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:09.114 01:47:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:09.114 01:47:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:09.114 01:47:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:09.114 01:47:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:09.114 01:47:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:09.114 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:09.114 01:47:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:09.114 01:47:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:09.114 01:47:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.114 01:47:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.114 01:47:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:09.114 01:47:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:09.114 01:47:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:09.114 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:09.114 01:47:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:09.114 01:47:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:09.114 01:47:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:09.114 01:47:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:09.114 01:47:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:09.114 01:47:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:09.114 01:47:54 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:09.114 01:47:54 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:09.114 01:47:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:09.115 01:47:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.115 01:47:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:09.115 01:47:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.115 01:47:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:09.115 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:09.115 01:47:54 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:13:09.115 01:47:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:09.115 01:47:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:09.115 01:47:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:09.115 01:47:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:09.115 01:47:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:09.115 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:09.115 01:47:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:09.115 01:47:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:09.115 01:47:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:09.115 01:47:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:09.115 01:47:54 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:09.115 01:47:54 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:09.115 01:47:54 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:09.115 01:47:54 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:09.115 01:47:54 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:09.115 01:47:54 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:09.115 01:47:54 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:09.115 01:47:54 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:09.115 01:47:54 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:09.115 01:47:54 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:09.115 01:47:54 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:09.115 01:47:54 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:09.115 01:47:54 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:09.115 01:47:54 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:09.115 01:47:54 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:09.115 01:47:54 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:09.115 01:47:54 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:09.115 01:47:54 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:09.115 01:47:54 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:09.115 01:47:54 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:09.115 01:47:54 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:09.115 01:47:54 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:09.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:09.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:13:09.115 00:13:09.115 --- 10.0.0.2 ping statistics --- 00:13:09.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:09.115 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:13:09.115 01:47:54 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:09.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:09.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms
00:13:09.115 
00:13:09.115 --- 10.0.0.1 ping statistics ---
00:13:09.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:09.115 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms
00:13:09.115 01:47:54 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:09.115 01:47:54 -- nvmf/common.sh@410 -- # return 0
00:13:09.115 01:47:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:13:09.115 01:47:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:09.115 01:47:54 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:13:09.115 01:47:54 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:13:09.115 01:47:54 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:09.115 01:47:54 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:13:09.115 01:47:54 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:13:09.115 01:47:54 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:13:09.115 01:47:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:13:09.115 01:47:54 -- common/autotest_common.sh@712 -- # xtrace_disable
00:13:09.115 01:47:54 -- common/autotest_common.sh@10 -- # set +x
00:13:09.115 01:47:54 -- nvmf/common.sh@469 -- # nvmfpid=2104694
00:13:09.115 01:47:54 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:13:09.115 01:47:54 -- nvmf/common.sh@470 -- # waitforlisten 2104694
00:13:09.115 01:47:54 -- common/autotest_common.sh@819 -- # '[' -z 2104694 ']'
00:13:09.115 01:47:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:09.115 01:47:54 -- common/autotest_common.sh@824 -- # local max_retries=100
00:13:09.115 01:47:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:09.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:09.115 01:47:54 -- common/autotest_common.sh@828 -- # xtrace_disable
00:13:09.115 01:47:54 -- common/autotest_common.sh@10 -- # set +x
00:13:09.115 [2024-04-15 01:47:54.616657] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
00:13:09.115 [2024-04-15 01:47:54.616759] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:09.115 EAL: No free 2048 kB hugepages reported on node 1
00:13:09.115 [2024-04-15 01:47:54.688738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:13:09.373 [2024-04-15 01:47:54.782510] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:13:09.373 [2024-04-15 01:47:54.782691] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:09.373 [2024-04-15 01:47:54.782710] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:09.373 [2024-04-15 01:47:54.782725] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
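Before the reactors come up below, it is worth condensing what nvmf_tcp_init assembled above: one port of the E810 pair (cvl_0_0) was moved into a private network namespace to act as the target side, its sibling (cvl_0_1) stayed in the root namespace as the initiator side, and nvmf_tgt was then launched inside that namespace (nvmf/common.sh@468). A minimal recap sketch, using only commands and values already shown in this log:

    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port moves into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and back
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Since the two addresses live in separate namespaces, the successful pings above imply the two ports are looped back-to-back on this rig, so every nvme connect that follows traverses the physical link rather than kernel loopback.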
00:13:09.373 [2024-04-15 01:47:54.782790] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.373 [2024-04-15 01:47:54.782854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.373 [2024-04-15 01:47:54.782904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.373 [2024-04-15 01:47:54.782906] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.938 01:47:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:09.938 01:47:55 -- common/autotest_common.sh@852 -- # return 0 00:13:09.938 01:47:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:09.938 01:47:55 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:09.938 01:47:55 -- common/autotest_common.sh@10 -- # set +x 00:13:10.196 01:47:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:10.196 01:47:55 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:10.196 01:47:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.196 01:47:55 -- common/autotest_common.sh@10 -- # set +x 00:13:10.196 01:47:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.196 01:47:55 -- target/rpc.sh@26 -- # stats='{ 00:13:10.196 "tick_rate": 2700000000, 00:13:10.196 "poll_groups": [ 00:13:10.196 { 00:13:10.196 "name": "nvmf_tgt_poll_group_0", 00:13:10.196 "admin_qpairs": 0, 00:13:10.196 "io_qpairs": 0, 00:13:10.196 "current_admin_qpairs": 0, 00:13:10.196 "current_io_qpairs": 0, 00:13:10.196 "pending_bdev_io": 0, 00:13:10.196 "completed_nvme_io": 0, 00:13:10.196 "transports": [] 00:13:10.196 }, 00:13:10.196 { 00:13:10.196 "name": "nvmf_tgt_poll_group_1", 00:13:10.196 "admin_qpairs": 0, 00:13:10.196 "io_qpairs": 0, 00:13:10.196 "current_admin_qpairs": 0, 00:13:10.196 "current_io_qpairs": 0, 00:13:10.197 "pending_bdev_io": 0, 00:13:10.197 "completed_nvme_io": 0, 00:13:10.197 "transports": [] 00:13:10.197 }, 00:13:10.197 { 00:13:10.197 "name": "nvmf_tgt_poll_group_2", 00:13:10.197 "admin_qpairs": 0, 00:13:10.197 "io_qpairs": 0, 00:13:10.197 "current_admin_qpairs": 0, 00:13:10.197 "current_io_qpairs": 0, 00:13:10.197 "pending_bdev_io": 0, 00:13:10.197 "completed_nvme_io": 0, 00:13:10.197 "transports": [] 00:13:10.197 }, 00:13:10.197 { 00:13:10.197 "name": "nvmf_tgt_poll_group_3", 00:13:10.197 "admin_qpairs": 0, 00:13:10.197 "io_qpairs": 0, 00:13:10.197 "current_admin_qpairs": 0, 00:13:10.197 "current_io_qpairs": 0, 00:13:10.197 "pending_bdev_io": 0, 00:13:10.197 "completed_nvme_io": 0, 00:13:10.197 "transports": [] 00:13:10.197 } 00:13:10.197 ] 00:13:10.197 }' 00:13:10.197 01:47:55 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:10.197 01:47:55 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:10.197 01:47:55 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:10.197 01:47:55 -- target/rpc.sh@15 -- # wc -l 00:13:10.197 01:47:55 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:13:10.197 01:47:55 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:10.197 01:47:55 -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:10.197 01:47:55 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:10.197 01:47:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.197 01:47:55 -- common/autotest_common.sh@10 -- # set +x 00:13:10.197 [2024-04-15 01:47:55.703928] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:10.197 01:47:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.197 01:47:55 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:10.197 01:47:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.197 01:47:55 -- common/autotest_common.sh@10 -- # set +x 00:13:10.197 01:47:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.197 01:47:55 -- target/rpc.sh@33 -- # stats='{ 00:13:10.197 "tick_rate": 2700000000, 00:13:10.197 "poll_groups": [ 00:13:10.197 { 00:13:10.197 "name": "nvmf_tgt_poll_group_0", 00:13:10.197 "admin_qpairs": 0, 00:13:10.197 "io_qpairs": 0, 00:13:10.197 "current_admin_qpairs": 0, 00:13:10.197 "current_io_qpairs": 0, 00:13:10.197 "pending_bdev_io": 0, 00:13:10.197 "completed_nvme_io": 0, 00:13:10.197 "transports": [ 00:13:10.197 { 00:13:10.197 "trtype": "TCP" 00:13:10.197 } 00:13:10.197 ] 00:13:10.197 }, 00:13:10.197 { 00:13:10.197 "name": "nvmf_tgt_poll_group_1", 00:13:10.197 "admin_qpairs": 0, 00:13:10.197 "io_qpairs": 0, 00:13:10.197 "current_admin_qpairs": 0, 00:13:10.197 "current_io_qpairs": 0, 00:13:10.197 "pending_bdev_io": 0, 00:13:10.197 "completed_nvme_io": 0, 00:13:10.197 "transports": [ 00:13:10.197 { 00:13:10.197 "trtype": "TCP" 00:13:10.197 } 00:13:10.197 ] 00:13:10.197 }, 00:13:10.197 { 00:13:10.197 "name": "nvmf_tgt_poll_group_2", 00:13:10.197 "admin_qpairs": 0, 00:13:10.197 "io_qpairs": 0, 00:13:10.197 "current_admin_qpairs": 0, 00:13:10.197 "current_io_qpairs": 0, 00:13:10.197 "pending_bdev_io": 0, 00:13:10.197 "completed_nvme_io": 0, 00:13:10.197 "transports": [ 00:13:10.197 { 00:13:10.197 "trtype": "TCP" 00:13:10.197 } 00:13:10.197 ] 00:13:10.197 }, 00:13:10.197 { 00:13:10.197 "name": "nvmf_tgt_poll_group_3", 00:13:10.197 "admin_qpairs": 0, 00:13:10.197 "io_qpairs": 0, 00:13:10.197 "current_admin_qpairs": 0, 00:13:10.197 "current_io_qpairs": 0, 00:13:10.197 "pending_bdev_io": 0, 00:13:10.197 "completed_nvme_io": 0, 00:13:10.197 "transports": [ 00:13:10.197 { 00:13:10.197 "trtype": "TCP" 00:13:10.197 } 00:13:10.197 ] 00:13:10.197 } 00:13:10.197 ] 00:13:10.197 }' 00:13:10.197 01:47:55 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:10.197 01:47:55 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:10.197 01:47:55 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:10.197 01:47:55 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:10.197 01:47:55 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:10.197 01:47:55 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:10.197 01:47:55 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:10.197 01:47:55 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:10.197 01:47:55 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:10.197 01:47:55 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:10.197 01:47:55 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:10.197 01:47:55 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:10.197 01:47:55 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:10.197 01:47:55 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:10.197 01:47:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.197 01:47:55 -- common/autotest_common.sh@10 -- # set +x 00:13:10.197 Malloc1 00:13:10.197 01:47:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.197 01:47:55 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:10.197 01:47:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.197 01:47:55 -- common/autotest_common.sh@10 -- # set +x 00:13:10.197 
01:47:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.197 01:47:55 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:10.197 01:47:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.197 01:47:55 -- common/autotest_common.sh@10 -- # set +x 00:13:10.455 01:47:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.455 01:47:55 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:10.455 01:47:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.455 01:47:55 -- common/autotest_common.sh@10 -- # set +x 00:13:10.455 01:47:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.455 01:47:55 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:10.455 01:47:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.455 01:47:55 -- common/autotest_common.sh@10 -- # set +x 00:13:10.455 [2024-04-15 01:47:55.859614] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:10.455 01:47:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.455 01:47:55 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:10.455 01:47:55 -- common/autotest_common.sh@640 -- # local es=0 00:13:10.455 01:47:55 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:10.455 01:47:55 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:10.455 01:47:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:10.455 01:47:55 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:10.455 01:47:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:10.455 01:47:55 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:10.455 01:47:55 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:10.455 01:47:55 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:10.455 01:47:55 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:10.455 01:47:55 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:13:10.455 [2024-04-15 01:47:55.882193] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:10.455 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:10.455 could not add new controller: failed to write to nvme-fabrics device 00:13:10.455 01:47:55 -- common/autotest_common.sh@643 -- # es=1 00:13:10.455 01:47:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:10.455 01:47:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:10.455 01:47:55 -- common/autotest_common.sh@667 -- # 
(( !es == 0 )) 00:13:10.455 01:47:55 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:10.455 01:47:55 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:10.455 01:47:55 -- common/autotest_common.sh@10 -- # set +x 00:13:10.455 01:47:55 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:10.455 01:47:55 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:11.020 01:47:56 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:11.020 01:47:56 -- common/autotest_common.sh@1177 -- # local i=0 00:13:11.020 01:47:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:11.020 01:47:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:11.020 01:47:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:12.918 01:47:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:12.918 01:47:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:12.918 01:47:58 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.918 01:47:58 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:12.918 01:47:58 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.918 01:47:58 -- common/autotest_common.sh@1187 -- # return 0 00:13:12.918 01:47:58 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:13.176 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.176 01:47:58 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:13.176 01:47:58 -- common/autotest_common.sh@1198 -- # local i=0 00:13:13.176 01:47:58 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:13.176 01:47:58 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.176 01:47:58 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:13.176 01:47:58 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:13.176 01:47:58 -- common/autotest_common.sh@1210 -- # return 0 00:13:13.176 01:47:58 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:13.176 01:47:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.176 01:47:58 -- common/autotest_common.sh@10 -- # set +x 00:13:13.176 01:47:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.176 01:47:58 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:13.177 01:47:58 -- common/autotest_common.sh@640 -- # local es=0 00:13:13.177 01:47:58 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:13.177 01:47:58 -- common/autotest_common.sh@628 -- # local arg=nvme 00:13:13.177 01:47:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:13.177 01:47:58 -- common/autotest_common.sh@632 -- # type -t nvme 00:13:13.177 01:47:58 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:13.177 01:47:58 -- common/autotest_common.sh@634 -- # type -P nvme 00:13:13.177 01:47:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:13.177 01:47:58 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:13:13.177 01:47:58 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:13:13.177 01:47:58 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:13.177 [2024-04-15 01:47:58.674009] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:13:13.177 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:13.177 could not add new controller: failed to write to nvme-fabrics device 00:13:13.177 01:47:58 -- common/autotest_common.sh@643 -- # es=1 00:13:13.177 01:47:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:13.177 01:47:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:13.177 01:47:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:13.177 01:47:58 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:13.177 01:47:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:13.177 01:47:58 -- common/autotest_common.sh@10 -- # set +x 00:13:13.177 01:47:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:13.177 01:47:58 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:13.745 01:47:59 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:13.745 01:47:59 -- common/autotest_common.sh@1177 -- # local i=0 00:13:13.745 01:47:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.745 01:47:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:13.745 01:47:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:16.275 01:48:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:16.275 01:48:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:16.275 01:48:01 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:16.275 01:48:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:16.275 01:48:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:16.275 01:48:01 -- common/autotest_common.sh@1187 -- # return 0 00:13:16.275 01:48:01 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:16.275 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:16.275 01:48:01 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:16.275 01:48:01 -- common/autotest_common.sh@1198 -- # local i=0 00:13:16.275 01:48:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:16.275 01:48:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.275 01:48:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:16.275 01:48:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:16.275 01:48:01 -- common/autotest_common.sh@1210 -- # return 0 00:13:16.275 01:48:01 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:16.275 01:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.275 01:48:01 -- common/autotest_common.sh@10 -- # set +x 00:13:16.275 01:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.275 01:48:01 -- target/rpc.sh@81 -- # seq 1 5 00:13:16.275 01:48:01 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:16.275 01:48:01 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:16.275 01:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.275 01:48:01 -- common/autotest_common.sh@10 -- # set +x 00:13:16.275 01:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.275 01:48:01 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:16.275 01:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.275 01:48:01 -- common/autotest_common.sh@10 -- # set +x 00:13:16.275 [2024-04-15 01:48:01.448511] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:16.275 01:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.275 01:48:01 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:16.275 01:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.275 01:48:01 -- common/autotest_common.sh@10 -- # set +x 00:13:16.275 01:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.275 01:48:01 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:16.275 01:48:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:16.275 01:48:01 -- common/autotest_common.sh@10 -- # set +x 00:13:16.275 01:48:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:16.275 01:48:01 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:16.533 01:48:02 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:16.533 01:48:02 -- common/autotest_common.sh@1177 -- # local i=0 00:13:16.533 01:48:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:16.533 01:48:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:16.533 01:48:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:18.429 01:48:04 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:18.429 01:48:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:18.429 01:48:04 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:18.688 01:48:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:18.688 01:48:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:18.688 01:48:04 -- common/autotest_common.sh@1187 -- # return 0 00:13:18.688 01:48:04 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:18.688 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.688 01:48:04 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:18.688 01:48:04 -- common/autotest_common.sh@1198 -- # local i=0 00:13:18.689 01:48:04 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:18.689 01:48:04 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 
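The sequence from target/rpc.sh@52 through @78 above is the host access-control round trip: nvme connect must fail while the host NQN is absent from the subsystem allow list (the NOT/valid_exec_arg wrapping inverts the expected failure), succeed after nvmf_subsystem_add_host, fail again after nvmf_subsystem_remove_host, and succeed once any-host mode is re-enabled. Reproduced by hand it looks roughly like this (a sketch; rpc_cmd in this log is the autotest shell wrapper around SPDK's scripts/rpc.py, and the --hostid/-q arguments from the log are omitted for brevity):

    SUBNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    scripts/rpc.py nvmf_subsystem_allow_any_host -d $SUBNQN   # enforce the allow list
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n $SUBNQN --hostnqn=$HOSTNQN   # rejected: "does not allow host"
    scripts/rpc.py nvmf_subsystem_add_host $SUBNQN $HOSTNQN
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n $SUBNQN --hostnqn=$HOSTNQN   # accepted
    nvme disconnect -n $SUBNQN
    scripts/rpc.py nvmf_subsystem_remove_host $SUBNQN $HOSTNQN   # back to rejected
    scripts/rpc.py nvmf_subsystem_allow_any_host -e $SUBNQN      # accepted for any host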
00:13:18.689 01:48:04 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:18.689 01:48:04 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:18.689 01:48:04 -- common/autotest_common.sh@1210 -- # return 0 00:13:18.689 01:48:04 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:18.689 01:48:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.689 01:48:04 -- common/autotest_common.sh@10 -- # set +x 00:13:18.689 01:48:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.689 01:48:04 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:18.689 01:48:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.689 01:48:04 -- common/autotest_common.sh@10 -- # set +x 00:13:18.689 01:48:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.689 01:48:04 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:18.689 01:48:04 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:18.689 01:48:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.689 01:48:04 -- common/autotest_common.sh@10 -- # set +x 00:13:18.689 01:48:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.689 01:48:04 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:18.689 01:48:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.689 01:48:04 -- common/autotest_common.sh@10 -- # set +x 00:13:18.689 [2024-04-15 01:48:04.185676] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:18.689 01:48:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.689 01:48:04 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:18.689 01:48:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.689 01:48:04 -- common/autotest_common.sh@10 -- # set +x 00:13:18.689 01:48:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.689 01:48:04 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:18.689 01:48:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:18.689 01:48:04 -- common/autotest_common.sh@10 -- # set +x 00:13:18.689 01:48:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:18.689 01:48:04 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:19.287 01:48:04 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:19.287 01:48:04 -- common/autotest_common.sh@1177 -- # local i=0 00:13:19.287 01:48:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:19.287 01:48:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:19.287 01:48:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:21.186 01:48:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:21.186 01:48:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:21.186 01:48:06 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:21.186 01:48:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:21.186 01:48:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:21.186 01:48:06 -- 
common/autotest_common.sh@1187 -- # return 0 00:13:21.186 01:48:06 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:21.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:21.444 01:48:06 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:21.444 01:48:06 -- common/autotest_common.sh@1198 -- # local i=0 00:13:21.444 01:48:06 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:21.444 01:48:06 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.444 01:48:06 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:21.444 01:48:06 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:21.444 01:48:06 -- common/autotest_common.sh@1210 -- # return 0 00:13:21.444 01:48:06 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:21.444 01:48:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.444 01:48:06 -- common/autotest_common.sh@10 -- # set +x 00:13:21.444 01:48:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.444 01:48:06 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:21.444 01:48:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.444 01:48:06 -- common/autotest_common.sh@10 -- # set +x 00:13:21.444 01:48:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.444 01:48:06 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:21.444 01:48:06 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:21.444 01:48:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.444 01:48:06 -- common/autotest_common.sh@10 -- # set +x 00:13:21.444 01:48:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.444 01:48:06 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:21.444 01:48:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.444 01:48:06 -- common/autotest_common.sh@10 -- # set +x 00:13:21.444 [2024-04-15 01:48:06.932457] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:21.444 01:48:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.444 01:48:06 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:21.444 01:48:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.444 01:48:06 -- common/autotest_common.sh@10 -- # set +x 00:13:21.444 01:48:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.444 01:48:06 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:21.444 01:48:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:21.444 01:48:06 -- common/autotest_common.sh@10 -- # set +x 00:13:21.444 01:48:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:21.444 01:48:06 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:22.010 01:48:07 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:22.010 01:48:07 -- common/autotest_common.sh@1177 -- # local i=0 00:13:22.010 01:48:07 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:22.010 01:48:07 -- common/autotest_common.sh@1179 -- 
# [[ -n '' ]] 00:13:22.010 01:48:07 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:24.538 01:48:09 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:24.538 01:48:09 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:24.538 01:48:09 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:24.538 01:48:09 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:24.538 01:48:09 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:24.538 01:48:09 -- common/autotest_common.sh@1187 -- # return 0 00:13:24.538 01:48:09 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:24.538 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.538 01:48:09 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:24.538 01:48:09 -- common/autotest_common.sh@1198 -- # local i=0 00:13:24.538 01:48:09 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:24.538 01:48:09 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.538 01:48:09 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:24.538 01:48:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:24.538 01:48:09 -- common/autotest_common.sh@1210 -- # return 0 00:13:24.538 01:48:09 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:24.538 01:48:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.538 01:48:09 -- common/autotest_common.sh@10 -- # set +x 00:13:24.538 01:48:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.538 01:48:09 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:24.538 01:48:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.538 01:48:09 -- common/autotest_common.sh@10 -- # set +x 00:13:24.538 01:48:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.538 01:48:09 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:24.538 01:48:09 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:24.538 01:48:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.538 01:48:09 -- common/autotest_common.sh@10 -- # set +x 00:13:24.538 01:48:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.538 01:48:09 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:24.538 01:48:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.538 01:48:09 -- common/autotest_common.sh@10 -- # set +x 00:13:24.538 [2024-04-15 01:48:09.794321] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:24.538 01:48:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.538 01:48:09 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:24.538 01:48:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.538 01:48:09 -- common/autotest_common.sh@10 -- # set +x 00:13:24.538 01:48:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.538 01:48:09 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:24.538 01:48:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:24.538 01:48:09 -- common/autotest_common.sh@10 -- # set +x 00:13:24.538 01:48:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:24.538 
01:48:09 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:24.796 01:48:10 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:24.796 01:48:10 -- common/autotest_common.sh@1177 -- # local i=0 00:13:24.796 01:48:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:24.796 01:48:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:24.796 01:48:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:27.320 01:48:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:27.320 01:48:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:27.320 01:48:12 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:27.320 01:48:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:27.320 01:48:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:27.320 01:48:12 -- common/autotest_common.sh@1187 -- # return 0 00:13:27.320 01:48:12 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:27.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.320 01:48:12 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:27.320 01:48:12 -- common/autotest_common.sh@1198 -- # local i=0 00:13:27.320 01:48:12 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:27.320 01:48:12 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.320 01:48:12 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:27.320 01:48:12 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:27.320 01:48:12 -- common/autotest_common.sh@1210 -- # return 0 00:13:27.320 01:48:12 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:27.320 01:48:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.320 01:48:12 -- common/autotest_common.sh@10 -- # set +x 00:13:27.320 01:48:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.320 01:48:12 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.320 01:48:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.320 01:48:12 -- common/autotest_common.sh@10 -- # set +x 00:13:27.320 01:48:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.320 01:48:12 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:27.320 01:48:12 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:27.320 01:48:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.320 01:48:12 -- common/autotest_common.sh@10 -- # set +x 00:13:27.320 01:48:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.320 01:48:12 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:27.320 01:48:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.320 01:48:12 -- common/autotest_common.sh@10 -- # set +x 00:13:27.320 [2024-04-15 01:48:12.520507] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:27.320 01:48:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.320 01:48:12 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:27.320 
01:48:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.320 01:48:12 -- common/autotest_common.sh@10 -- # set +x 00:13:27.320 01:48:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.320 01:48:12 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:27.320 01:48:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.320 01:48:12 -- common/autotest_common.sh@10 -- # set +x 00:13:27.320 01:48:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.320 01:48:12 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:27.577 01:48:13 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:27.577 01:48:13 -- common/autotest_common.sh@1177 -- # local i=0 00:13:27.577 01:48:13 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:27.577 01:48:13 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:27.577 01:48:13 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:30.102 01:48:15 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:30.102 01:48:15 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:30.102 01:48:15 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:30.102 01:48:15 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:30.102 01:48:15 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:30.103 01:48:15 -- common/autotest_common.sh@1187 -- # return 0 00:13:30.103 01:48:15 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:30.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:30.103 01:48:15 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:30.103 01:48:15 -- common/autotest_common.sh@1198 -- # local i=0 00:13:30.103 01:48:15 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:30.103 01:48:15 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.103 01:48:15 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:30.103 01:48:15 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:30.103 01:48:15 -- common/autotest_common.sh@1210 -- # return 0 00:13:30.103 01:48:15 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@99 -- # seq 1 5 00:13:30.103 01:48:15 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:30.103 01:48:15 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 [2024-04-15 01:48:15.338691] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:30.103 01:48:15 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 [2024-04-15 01:48:15.386782] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- 
common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:30.103 01:48:15 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 [2024-04-15 01:48:15.434935] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:30.103 01:48:15 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 [2024-04-15 01:48:15.483121] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 
01:48:15 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:30.103 01:48:15 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 [2024-04-15 01:48:15.531296] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.103 01:48:15 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.103 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.103 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.103 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.104 01:48:15 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:30.104 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.104 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.104 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.104 01:48:15 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
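The two loops that just completed exercise the subsystem lifecycle five times each: target/rpc.sh@81-@94 with a full connect in between (create subsystem, TCP listener on 10.0.0.2:4420, Malloc1 attached as nsid 5, connect, wait for the serial in lsblk, disconnect, detach, delete), and target/rpc.sh@99-@107 doing create/attach/detach/delete back to back with no initiator involved. One iteration of the first loop, stripped of the xtrace noise (a sketch; waitforserial's bounded 15-try retry is simplified to an until loop, and $HOSTNQN is the value generated earlier in the log):

    for i in $(seq 1 5); do
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5   # Malloc1 as nsid 5
        scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
        nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 --hostnqn=$HOSTNQN
        # waitforserial: poll until the block device with the subsystem serial shows up
        until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
        scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    done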
00:13:30.104 01:48:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:30.104 01:48:15 -- common/autotest_common.sh@10 -- # set +x 00:13:30.104 01:48:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:30.104 01:48:15 -- target/rpc.sh@110 -- # stats='{ 00:13:30.104 "tick_rate": 2700000000, 00:13:30.104 "poll_groups": [ 00:13:30.104 { 00:13:30.104 "name": "nvmf_tgt_poll_group_0", 00:13:30.104 "admin_qpairs": 2, 00:13:30.104 "io_qpairs": 84, 00:13:30.104 "current_admin_qpairs": 0, 00:13:30.104 "current_io_qpairs": 0, 00:13:30.104 "pending_bdev_io": 0, 00:13:30.104 "completed_nvme_io": 134, 00:13:30.104 "transports": [ 00:13:30.104 { 00:13:30.104 "trtype": "TCP" 00:13:30.104 } 00:13:30.104 ] 00:13:30.104 }, 00:13:30.104 { 00:13:30.104 "name": "nvmf_tgt_poll_group_1", 00:13:30.104 "admin_qpairs": 2, 00:13:30.104 "io_qpairs": 84, 00:13:30.104 "current_admin_qpairs": 0, 00:13:30.104 "current_io_qpairs": 0, 00:13:30.104 "pending_bdev_io": 0, 00:13:30.104 "completed_nvme_io": 183, 00:13:30.104 "transports": [ 00:13:30.104 { 00:13:30.104 "trtype": "TCP" 00:13:30.104 } 00:13:30.104 ] 00:13:30.104 }, 00:13:30.104 { 00:13:30.104 "name": "nvmf_tgt_poll_group_2", 00:13:30.104 "admin_qpairs": 1, 00:13:30.104 "io_qpairs": 84, 00:13:30.104 "current_admin_qpairs": 0, 00:13:30.104 "current_io_qpairs": 0, 00:13:30.104 "pending_bdev_io": 0, 00:13:30.104 "completed_nvme_io": 212, 00:13:30.104 "transports": [ 00:13:30.104 { 00:13:30.104 "trtype": "TCP" 00:13:30.104 } 00:13:30.104 ] 00:13:30.104 }, 00:13:30.104 { 00:13:30.104 "name": "nvmf_tgt_poll_group_3", 00:13:30.104 "admin_qpairs": 2, 00:13:30.104 "io_qpairs": 84, 00:13:30.104 "current_admin_qpairs": 0, 00:13:30.104 "current_io_qpairs": 0, 00:13:30.104 "pending_bdev_io": 0, 00:13:30.104 "completed_nvme_io": 157, 00:13:30.104 "transports": [ 00:13:30.104 { 00:13:30.104 "trtype": "TCP" 00:13:30.104 } 00:13:30.104 ] 00:13:30.104 } 00:13:30.104 ] 00:13:30.104 }' 00:13:30.104 01:48:15 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:30.104 01:48:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:30.104 01:48:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:30.104 01:48:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:30.104 01:48:15 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:30.104 01:48:15 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:30.104 01:48:15 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:30.104 01:48:15 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:30.104 01:48:15 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:30.104 01:48:15 -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:30.104 01:48:15 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:30.104 01:48:15 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:30.104 01:48:15 -- target/rpc.sh@123 -- # nvmftestfini 00:13:30.104 01:48:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:30.104 01:48:15 -- nvmf/common.sh@116 -- # sync 00:13:30.104 01:48:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:30.104 01:48:15 -- nvmf/common.sh@119 -- # set +e 00:13:30.104 01:48:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:30.104 01:48:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:30.104 rmmod nvme_tcp 00:13:30.104 rmmod nvme_fabrics 00:13:30.104 rmmod nvme_keyring 00:13:30.104 01:48:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:30.104 01:48:15 -- nvmf/common.sh@123 -- # set -e 00:13:30.104 01:48:15 -- 
nvmf/common.sh@124 -- # return 0 00:13:30.104 01:48:15 -- nvmf/common.sh@477 -- # '[' -n 2104694 ']' 00:13:30.104 01:48:15 -- nvmf/common.sh@478 -- # killprocess 2104694 00:13:30.104 01:48:15 -- common/autotest_common.sh@926 -- # '[' -z 2104694 ']' 00:13:30.104 01:48:15 -- common/autotest_common.sh@930 -- # kill -0 2104694 00:13:30.104 01:48:15 -- common/autotest_common.sh@931 -- # uname 00:13:30.104 01:48:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:30.104 01:48:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2104694 00:13:30.104 01:48:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:30.104 01:48:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:30.104 01:48:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2104694' 00:13:30.104 killing process with pid 2104694 00:13:30.104 01:48:15 -- common/autotest_common.sh@945 -- # kill 2104694 00:13:30.104 01:48:15 -- common/autotest_common.sh@950 -- # wait 2104694 00:13:30.363 01:48:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:30.363 01:48:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:30.363 01:48:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:30.363 01:48:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:30.363 01:48:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:30.363 01:48:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:30.363 01:48:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:30.363 01:48:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.903 01:48:18 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:32.903 00:13:32.903 real 0m25.653s 00:13:32.903 user 1m24.113s 00:13:32.903 sys 0m4.103s 00:13:32.903 01:48:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:32.903 01:48:18 -- common/autotest_common.sh@10 -- # set +x 00:13:32.903 ************************************ 00:13:32.903 END TEST nvmf_rpc 00:13:32.903 ************************************ 00:13:32.903 01:48:18 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:32.903 01:48:18 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:32.903 01:48:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:32.903 01:48:18 -- common/autotest_common.sh@10 -- # set +x 00:13:32.903 ************************************ 00:13:32.903 START TEST nvmf_invalid 00:13:32.903 ************************************ 00:13:32.903 01:48:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:32.903 * Looking for test storage... 
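The qpair totals asserted as nvmf_rpc wound down above come from a small jsum helper in target/rpc.sh: it applies a jq filter to the nvmf_get_stats JSON and sums the resulting column with awk. A minimal standalone sketch of that pattern (the inline sample JSON and the echoed message are illustrative, not taken from the harness):

    # jsum: sum one numeric field across all poll groups in nvmf_get_stats output.
    # jq emits one number per poll group; awk accumulates them into a single total.
    jsum() {
        local filter=$1
        jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
    }

    # Illustrative input; the real run captures: stats=$(rpc_cmd nvmf_get_stats)
    stats='{"poll_groups":[{"io_qpairs":84},{"io_qpairs":84},{"io_qpairs":84},{"io_qpairs":84}]}'
    (( $(jsum '.poll_groups[].io_qpairs') > 0 )) && echo "I/O qpairs were created"

That is why the trace checks (( 7 > 0 )) and (( 336 > 0 )): 2+2+1+2 admin qpairs and 4x84 I/O qpairs were accumulated across the four poll groups.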
00:13:32.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:32.903 01:48:18 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:32.903 01:48:18 -- nvmf/common.sh@7 -- # uname -s 00:13:32.903 01:48:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:32.903 01:48:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:32.903 01:48:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:32.903 01:48:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:32.903 01:48:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:32.903 01:48:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:32.903 01:48:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:32.903 01:48:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:32.903 01:48:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:32.903 01:48:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:32.903 01:48:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:32.903 01:48:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:32.903 01:48:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:32.903 01:48:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:32.903 01:48:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:32.903 01:48:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:32.903 01:48:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:32.903 01:48:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:32.903 01:48:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:32.903 01:48:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.903 01:48:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.903 01:48:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.903 01:48:18 -- paths/export.sh@5 -- # export PATH 00:13:32.903 01:48:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:32.903 01:48:18 -- nvmf/common.sh@46 -- # : 0 00:13:32.903 01:48:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:32.903 01:48:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:32.903 01:48:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:32.903 01:48:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:32.903 01:48:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:32.903 01:48:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:32.903 01:48:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:32.903 01:48:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:32.903 01:48:18 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:32.903 01:48:18 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:32.903 01:48:18 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:32.903 01:48:18 -- target/invalid.sh@14 -- # target=foobar 00:13:32.903 01:48:18 -- target/invalid.sh@16 -- # RANDOM=0 00:13:32.903 01:48:18 -- target/invalid.sh@34 -- # nvmftestinit 00:13:32.903 01:48:18 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:32.903 01:48:18 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:32.903 01:48:18 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:32.903 01:48:18 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:32.903 01:48:18 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:32.903 01:48:18 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:32.903 01:48:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:32.903 01:48:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:32.903 01:48:18 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:13:32.903 01:48:18 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:13:32.903 01:48:18 -- nvmf/common.sh@284 -- # xtrace_disable 00:13:32.903 01:48:18 -- common/autotest_common.sh@10 -- # set +x 00:13:34.807 01:48:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:34.807 01:48:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:13:34.807 01:48:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:13:34.807 01:48:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:13:34.807 01:48:20 -- 
nvmf/common.sh@291 -- # local -a pci_net_devs 00:13:34.807 01:48:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:13:34.807 01:48:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:13:34.807 01:48:20 -- nvmf/common.sh@294 -- # net_devs=() 00:13:34.807 01:48:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:13:34.807 01:48:20 -- nvmf/common.sh@295 -- # e810=() 00:13:34.807 01:48:20 -- nvmf/common.sh@295 -- # local -ga e810 00:13:34.807 01:48:20 -- nvmf/common.sh@296 -- # x722=() 00:13:34.807 01:48:20 -- nvmf/common.sh@296 -- # local -ga x722 00:13:34.807 01:48:20 -- nvmf/common.sh@297 -- # mlx=() 00:13:34.807 01:48:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:13:34.807 01:48:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:34.807 01:48:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:34.807 01:48:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:34.807 01:48:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:34.807 01:48:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:34.807 01:48:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:34.807 01:48:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:34.807 01:48:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:34.807 01:48:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:34.807 01:48:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:34.807 01:48:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:34.807 01:48:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:13:34.807 01:48:20 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:13:34.807 01:48:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:13:34.807 01:48:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:34.807 01:48:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:34.807 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:34.807 01:48:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:13:34.807 01:48:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:34.807 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:34.807 01:48:20 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:13:34.807 01:48:20 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:34.807 
01:48:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.807 01:48:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:34.807 01:48:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.807 01:48:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:34.807 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:34.807 01:48:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.807 01:48:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:13:34.807 01:48:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:34.807 01:48:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:13:34.807 01:48:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:34.807 01:48:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:34.807 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:34.807 01:48:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:13:34.807 01:48:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:13:34.807 01:48:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:13:34.807 01:48:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:13:34.807 01:48:20 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.807 01:48:20 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.807 01:48:20 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:34.807 01:48:20 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:13:34.807 01:48:20 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:34.807 01:48:20 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:34.807 01:48:20 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:13:34.807 01:48:20 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:34.807 01:48:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.807 01:48:20 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:13:34.807 01:48:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:13:34.807 01:48:20 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:13:34.807 01:48:20 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:34.807 01:48:20 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:34.807 01:48:20 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:34.807 01:48:20 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:13:34.807 01:48:20 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:34.807 01:48:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:34.807 01:48:20 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:34.807 01:48:20 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:13:34.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:34.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:13:34.807 00:13:34.807 --- 10.0.0.2 ping statistics --- 00:13:34.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.807 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:13:34.807 01:48:20 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:34.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:34.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:13:34.807 00:13:34.807 --- 10.0.0.1 ping statistics --- 00:13:34.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:34.807 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:13:34.807 01:48:20 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:34.807 01:48:20 -- nvmf/common.sh@410 -- # return 0 00:13:34.807 01:48:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:34.807 01:48:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:34.807 01:48:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:34.807 01:48:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:34.807 01:48:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:34.807 01:48:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:34.807 01:48:20 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:34.807 01:48:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:34.807 01:48:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:34.807 01:48:20 -- common/autotest_common.sh@10 -- # set +x 00:13:34.807 01:48:20 -- nvmf/common.sh@469 -- # nvmfpid=2109972 00:13:34.807 01:48:20 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:34.807 01:48:20 -- nvmf/common.sh@470 -- # waitforlisten 2109972 00:13:34.807 01:48:20 -- common/autotest_common.sh@819 -- # '[' -z 2109972 ']' 00:13:34.807 01:48:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:34.807 01:48:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:34.808 01:48:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:34.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:34.808 01:48:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:34.808 01:48:20 -- common/autotest_common.sh@10 -- # set +x 00:13:34.808 [2024-04-15 01:48:20.379262] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:13:34.808 [2024-04-15 01:48:20.379353] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:34.808 EAL: No free 2048 kB hugepages reported on node 1 00:13:34.808 [2024-04-15 01:48:20.447994] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:35.066 [2024-04-15 01:48:20.542635] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:35.066 [2024-04-15 01:48:20.542819] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.066 [2024-04-15 01:48:20.542839] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.066 [2024-04-15 01:48:20.542854] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
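nvmfappstart above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten until the app's RPC socket answers. A rough sketch of that start-and-wait flow (relative paths, the retry budget, and the choice of probe RPC are illustrative; the harness's waitforlisten does more bookkeeping):

    # Start the target in the test namespace and wait for its RPC socket.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    for _ in {1..100}; do
        # rpc_get_methods only succeeds once the app is initialized and listening
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done

With the socket up, the test can begin issuing the nvmf_create_subsystem calls that follow.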
00:13:35.066 [2024-04-15 01:48:20.542923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:35.066 [2024-04-15 01:48:20.542980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.066 [2024-04-15 01:48:20.543053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:35.066 [2024-04-15 01:48:20.543044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.999 01:48:21 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:35.999 01:48:21 -- common/autotest_common.sh@852 -- # return 0 00:13:35.999 01:48:21 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:35.999 01:48:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:35.999 01:48:21 -- common/autotest_common.sh@10 -- # set +x 00:13:35.999 01:48:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:35.999 01:48:21 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:35.999 01:48:21 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3450 00:13:35.999 [2024-04-15 01:48:21.565292] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:35.999 01:48:21 -- target/invalid.sh@40 -- # out='request: 00:13:35.999 { 00:13:35.999 "nqn": "nqn.2016-06.io.spdk:cnode3450", 00:13:35.999 "tgt_name": "foobar", 00:13:35.999 "method": "nvmf_create_subsystem", 00:13:35.999 "req_id": 1 00:13:35.999 } 00:13:35.999 Got JSON-RPC error response 00:13:35.999 response: 00:13:35.999 { 00:13:35.999 "code": -32603, 00:13:35.999 "message": "Unable to find target foobar" 00:13:35.999 }' 00:13:35.999 01:48:21 -- target/invalid.sh@41 -- # [[ request: 00:13:35.999 { 00:13:35.999 "nqn": "nqn.2016-06.io.spdk:cnode3450", 00:13:35.999 "tgt_name": "foobar", 00:13:35.999 "method": "nvmf_create_subsystem", 00:13:35.999 "req_id": 1 00:13:35.999 } 00:13:35.999 Got JSON-RPC error response 00:13:35.999 response: 00:13:35.999 { 00:13:35.999 "code": -32603, 00:13:35.999 "message": "Unable to find target foobar" 00:13:35.999 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:35.999 01:48:21 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:35.999 01:48:21 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode16256 00:13:36.257 [2024-04-15 01:48:21.806117] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16256: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:36.257 01:48:21 -- target/invalid.sh@45 -- # out='request: 00:13:36.257 { 00:13:36.257 "nqn": "nqn.2016-06.io.spdk:cnode16256", 00:13:36.257 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:36.257 "method": "nvmf_create_subsystem", 00:13:36.257 "req_id": 1 00:13:36.257 } 00:13:36.257 Got JSON-RPC error response 00:13:36.257 response: 00:13:36.257 { 00:13:36.257 "code": -32602, 00:13:36.257 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:36.257 }' 00:13:36.257 01:48:21 -- target/invalid.sh@46 -- # [[ request: 00:13:36.257 { 00:13:36.257 "nqn": "nqn.2016-06.io.spdk:cnode16256", 00:13:36.257 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:36.257 "method": "nvmf_create_subsystem", 00:13:36.257 "req_id": 1 00:13:36.258 } 00:13:36.258 Got JSON-RPC error response 00:13:36.258 response: 00:13:36.258 { 
00:13:36.258 "code": -32602, 00:13:36.258 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:36.258 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:36.258 01:48:21 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:36.258 01:48:21 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode2166 00:13:36.516 [2024-04-15 01:48:22.042834] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2166: invalid model number 'SPDK_Controller' 00:13:36.516 01:48:22 -- target/invalid.sh@50 -- # out='request: 00:13:36.516 { 00:13:36.516 "nqn": "nqn.2016-06.io.spdk:cnode2166", 00:13:36.516 "model_number": "SPDK_Controller\u001f", 00:13:36.516 "method": "nvmf_create_subsystem", 00:13:36.516 "req_id": 1 00:13:36.516 } 00:13:36.516 Got JSON-RPC error response 00:13:36.516 response: 00:13:36.516 { 00:13:36.516 "code": -32602, 00:13:36.516 "message": "Invalid MN SPDK_Controller\u001f" 00:13:36.516 }' 00:13:36.516 01:48:22 -- target/invalid.sh@51 -- # [[ request: 00:13:36.516 { 00:13:36.516 "nqn": "nqn.2016-06.io.spdk:cnode2166", 00:13:36.516 "model_number": "SPDK_Controller\u001f", 00:13:36.516 "method": "nvmf_create_subsystem", 00:13:36.516 "req_id": 1 00:13:36.516 } 00:13:36.516 Got JSON-RPC error response 00:13:36.516 response: 00:13:36.516 { 00:13:36.516 "code": -32602, 00:13:36.516 "message": "Invalid MN SPDK_Controller\u001f" 00:13:36.516 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:36.516 01:48:22 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:36.516 01:48:22 -- target/invalid.sh@19 -- # local length=21 ll 00:13:36.516 01:48:22 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:36.516 01:48:22 -- target/invalid.sh@21 -- # local chars 00:13:36.516 01:48:22 -- target/invalid.sh@22 -- # local string 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # printf %x 105 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # string+=i 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # printf %x 124 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # string+='|' 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # printf %x 66 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # string+=B 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # printf %x 93 00:13:36.516 01:48:22 -- 
target/invalid.sh@25 -- # echo -e '\x5d' 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # string+=']' 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # printf %x 91 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # string+='[' 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # printf %x 53 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # string+=5 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # printf %x 86 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # string+=V 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # printf %x 87 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # string+=W 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # printf %x 70 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # string+=F 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # printf %x 46 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # string+=. 
00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # printf %x 54 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # string+=6 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # printf %x 114 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # string+=r 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # printf %x 105 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x69' 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # string+=i 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # printf %x 116 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # string+=t 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # printf %x 90 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # string+=Z 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # printf %x 102 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # string+=f 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # printf %x 74 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # string+=J 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.516 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.516 01:48:22 -- target/invalid.sh@25 -- # printf %x 49 00:13:36.517 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:36.517 01:48:22 -- target/invalid.sh@25 -- # string+=1 00:13:36.517 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.517 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.517 01:48:22 -- target/invalid.sh@25 -- # printf %x 88 00:13:36.517 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:36.517 01:48:22 -- target/invalid.sh@25 -- # string+=X 00:13:36.517 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.517 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.517 01:48:22 -- target/invalid.sh@25 -- # printf %x 32 00:13:36.517 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:36.517 01:48:22 -- target/invalid.sh@25 -- # string+=' ' 00:13:36.517 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.517 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.517 01:48:22 -- target/invalid.sh@25 -- # printf %x 75 00:13:36.517 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:36.517 01:48:22 -- target/invalid.sh@25 -- # string+=K 
00:13:36.517 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.517 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.517 01:48:22 -- target/invalid.sh@28 -- # [[ i == \- ]] 00:13:36.517 01:48:22 -- target/invalid.sh@31 -- # echo 'i|B][5VWF.6ritZfJ1X K' 00:13:36.517 01:48:22 -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'i|B][5VWF.6ritZfJ1X K' nqn.2016-06.io.spdk:cnode3065 00:13:36.776 [2024-04-15 01:48:22.371907] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3065: invalid serial number 'i|B][5VWF.6ritZfJ1X K' 00:13:36.777 01:48:22 -- target/invalid.sh@54 -- # out='request: 00:13:36.777 { 00:13:36.777 "nqn": "nqn.2016-06.io.spdk:cnode3065", 00:13:36.777 "serial_number": "i|B][5VWF.6ritZfJ1X K", 00:13:36.777 "method": "nvmf_create_subsystem", 00:13:36.777 "req_id": 1 00:13:36.777 } 00:13:36.777 Got JSON-RPC error response 00:13:36.777 response: 00:13:36.777 { 00:13:36.777 "code": -32602, 00:13:36.777 "message": "Invalid SN i|B][5VWF.6ritZfJ1X K" 00:13:36.777 }' 00:13:36.777 01:48:22 -- target/invalid.sh@55 -- # [[ request: 00:13:36.777 { 00:13:36.777 "nqn": "nqn.2016-06.io.spdk:cnode3065", 00:13:36.777 "serial_number": "i|B][5VWF.6ritZfJ1X K", 00:13:36.777 "method": "nvmf_create_subsystem", 00:13:36.777 "req_id": 1 00:13:36.777 } 00:13:36.777 Got JSON-RPC error response 00:13:36.777 response: 00:13:36.777 { 00:13:36.777 "code": -32602, 00:13:36.777 "message": "Invalid SN i|B][5VWF.6ritZfJ1X K" 00:13:36.777 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:36.777 01:48:22 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:36.777 01:48:22 -- target/invalid.sh@19 -- # local length=41 ll 00:13:36.777 01:48:22 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:36.777 01:48:22 -- target/invalid.sh@21 -- # local chars 00:13:36.777 01:48:22 -- target/invalid.sh@22 -- # local string 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # printf %x 65 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # string+=A 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # printf %x 111 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # string+=o 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # printf %x 42 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # string+='*' 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.777 01:48:22 -- 
target/invalid.sh@25 -- # printf %x 88 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # string+=X 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # printf %x 47 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # string+=/ 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # printf %x 86 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # string+=V 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # printf %x 54 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # string+=6 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # printf %x 95 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x5f' 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # string+=_ 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # printf %x 71 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # string+=G 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # printf %x 79 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # string+=O 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:36.777 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # printf %x 49 00:13:36.777 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x31' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+=1 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 44 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+=, 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 47 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+=/ 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 62 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+='>' 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- 
target/invalid.sh@25 -- # printf %x 75 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+=K 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 35 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+='#' 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 58 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+=: 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 47 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+=/ 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 35 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+='#' 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 37 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x25' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+=% 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 71 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+=G 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 66 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+=B 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 121 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+=y 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 94 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+='^' 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 97 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+=a 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- 
target/invalid.sh@25 -- # printf %x 109 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+=m 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 62 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+='>' 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 121 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+=y 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 121 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+=y 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 46 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+=. 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 65 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # string+=A 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.069 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.069 01:48:22 -- target/invalid.sh@25 -- # printf %x 104 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # string+=h 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # printf %x 111 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # string+=o 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # printf %x 45 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # string+=- 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # printf %x 60 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # string+='<' 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # printf %x 86 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # string+=V 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.070 01:48:22 -- 
target/invalid.sh@25 -- # printf %x 111 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # string+=o 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # printf %x 108 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # string+=l 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # printf %x 46 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # string+=. 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # printf %x 54 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x36' 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # string+=6 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # printf %x 51 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:37.070 01:48:22 -- target/invalid.sh@25 -- # string+=3 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:37.070 01:48:22 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:37.070 01:48:22 -- target/invalid.sh@28 -- # [[ A == \- ]] 00:13:37.070 01:48:22 -- target/invalid.sh@31 -- # echo 'Ao*X/V6_GO1,/>K#:/#%GBy^am>yy.Aho-<Vol.63' 00:13:39.651 01:48:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:39.651 01:48:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:41.560 01:48:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:13:41.560 00:13:41.560 real 0m9.078s 00:13:41.560 user 0m21.912s 00:13:41.560 sys 0m2.433s 00:13:41.560 01:48:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:41.560 01:48:27 -- common/autotest_common.sh@10 -- # set +x 00:13:41.560 ************************************ 00:13:41.560 END TEST nvmf_invalid 00:13:41.560 ************************************ 00:13:41.560 01:48:27 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:41.560 01:48:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:41.560 01:48:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:41.560 01:48:27 -- common/autotest_common.sh@10 -- # set +x 00:13:41.560 ************************************ 00:13:41.560 START TEST nvmf_abort 00:13:41.560 ************************************ 00:13:41.560 01:48:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:41.818 * Looking for test storage... 
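The long printf %x / echo -e chains above are target/invalid.sh's gen_random_s building 21- and 41-character strings one random printable character at a time, which the test then feeds to nvmf_create_subsystem as serial and model numbers to provoke the Invalid SN / Invalid MN errors. Condensed into a standalone sketch (the exact character table and the leading-dash handling in the harness may differ):

    # Build a random printable-ASCII string, as the traced loop does:
    # pick a code point, render it with a printf hex escape, append.
    gen_random_s() {
        local length=$1 code string=
        while (( ${#string} < length )); do
            code=$(( RANDOM % 95 + 32 ))               # printable range 32-126
            string+=$(printf "\\x$(printf %x "$code")")
        done
        [[ $string == -* ]] && string=${string/-/ }    # the [[ ... == \- ]] guard above:
                                                       # avoid strings that parse as options
        echo "$string"
    }

    gen_random_s 21

The 21-character run above became nvmf_create_subsystem -s 'i|B][5VWF.6ritZfJ1X K', which the target rejected with the expected Invalid SN response.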
00:13:41.818 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:41.818 01:48:27 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:41.818 01:48:27 -- nvmf/common.sh@7 -- # uname -s 00:13:41.818 01:48:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:41.818 01:48:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:41.818 01:48:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:41.818 01:48:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:41.818 01:48:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:41.818 01:48:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:41.818 01:48:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:41.818 01:48:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:41.818 01:48:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:41.818 01:48:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:41.818 01:48:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:41.818 01:48:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:41.818 01:48:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:41.818 01:48:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:41.818 01:48:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:41.818 01:48:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:41.818 01:48:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:41.818 01:48:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:41.818 01:48:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:41.818 01:48:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.818 01:48:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:41.818 01:48:27 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:41.818 01:48:27 -- paths/export.sh@5 -- # export PATH
00:13:41.818 01:48:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:41.818 01:48:27 -- nvmf/common.sh@46 -- # : 0
00:13:41.818 01:48:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:13:41.818 01:48:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:13:41.818 01:48:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:13:41.818 01:48:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:41.818 01:48:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:41.818 01:48:27 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:13:41.818 01:48:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:13:41.818 01:48:27 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:13:41.818 01:48:27 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:13:41.818 01:48:27 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:13:41.818 01:48:27 -- target/abort.sh@14 -- # nvmftestinit
00:13:41.818 01:48:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:13:41.818 01:48:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:41.818 01:48:27 -- nvmf/common.sh@436 -- # prepare_net_devs
00:13:41.818 01:48:27 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:13:41.818 01:48:27 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:13:41.818 01:48:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:41.818 01:48:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:41.818 01:48:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:41.818 01:48:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:13:41.818 01:48:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:13:41.818 01:48:27 -- nvmf/common.sh@284 -- # xtrace_disable
00:13:41.818 01:48:27 -- common/autotest_common.sh@10 -- # set +x
00:13:43.720 01:48:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:13:43.720 01:48:29 -- nvmf/common.sh@290 -- # pci_devs=()
00:13:43.720 01:48:29 -- nvmf/common.sh@290 -- # local -a pci_devs
00:13:43.720 01:48:29 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:13:43.720 01:48:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:13:43.720 01:48:29 -- nvmf/common.sh@292 -- # pci_drivers=()
00:13:43.720 01:48:29 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:13:43.720 01:48:29 -- nvmf/common.sh@294 -- # net_devs=()
00:13:43.720 01:48:29 -- nvmf/common.sh@294 -- # local -ga net_devs
00:13:43.720 01:48:29 -- nvmf/common.sh@295 -- # e810=()
00:13:43.720 01:48:29 -- nvmf/common.sh@295 -- # local -ga e810
00:13:43.720 01:48:29 -- nvmf/common.sh@296 -- # x722=()
00:13:43.720 01:48:29 -- nvmf/common.sh@296 -- # local -ga x722
00:13:43.720 01:48:29 -- nvmf/common.sh@297 -- # mlx=()
00:13:43.720 01:48:29 -- nvmf/common.sh@297 -- # local -ga mlx
00:13:43.720 01:48:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:13:43.720 01:48:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:13:43.720 01:48:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:13:43.720 01:48:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:13:43.720 01:48:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:13:43.720 01:48:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:13:43.720 01:48:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:13:43.720 01:48:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:13:43.720 01:48:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:13:43.720 01:48:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:13:43.720 01:48:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:13:43.720 01:48:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:13:43.720 01:48:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:13:43.720 01:48:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:13:43.720 01:48:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:13:43.720 01:48:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:13:43.720 01:48:29 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:13:43.720 01:48:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:13:43.720 01:48:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:13:43.720 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:13:43.720 01:48:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:13:43.720 01:48:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:13:43.720 01:48:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:43.720 01:48:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:43.720 01:48:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:13:43.980 01:48:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:13:43.980 01:48:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:13:43.980 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:13:43.980 01:48:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:13:43.980 01:48:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:13:43.980 01:48:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:43.980 01:48:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:43.980 01:48:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:13:43.980 01:48:29 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:13:43.980 01:48:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:13:43.980 01:48:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:13:43.980 01:48:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:13:43.980 01:48:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:43.980 01:48:29 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:13:43.980 01:48:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:43.980 01:48:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:13:43.980 Found net devices under 0000:0a:00.0: cvl_0_0
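
The trace above is gather_supported_nvmf_pci_devs matching NIC PCI IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and a set of Mellanox IDs) against the PCI bus and then resolving each matched function to its kernel netdev through sysfs. A minimal standalone sketch of the same sysfs lookup, assuming the Intel vendor ID and the 0x159b device ID reported above (this is an illustration, not the harness's actual code):

    # Sketch: list the net interfaces backing each matching PCI function.
    for pci in /sys/bus/pci/devices/*; do
        # vendor/device IDs taken from the trace above (assumption: 0x159b only)
        [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue   # skip functions with no bound netdev
            echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done
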
00:13:43.980 01:48:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:13:43.980 01:48:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:13:43.980 01:48:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:43.980 01:48:29 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:13:43.980 01:48:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:43.980 01:48:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:13:43.980 Found net devices under 0000:0a:00.1: cvl_0_1
00:13:43.980 01:48:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:13:43.980 01:48:29 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:13:43.980 01:48:29 -- nvmf/common.sh@402 -- # is_hw=yes
00:13:43.980 01:48:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:13:43.980 01:48:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:13:43.980 01:48:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:13:43.980 01:48:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:43.980 01:48:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:43.980 01:48:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:43.980 01:48:29 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:13:43.980 01:48:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:43.980 01:48:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:43.980 01:48:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:13:43.980 01:48:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:43.980 01:48:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:43.980 01:48:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:13:43.980 01:48:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:13:43.980 01:48:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:13:43.980 01:48:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:43.980 01:48:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:43.980 01:48:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:43.980 01:48:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:13:43.980 01:48:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:43.980 01:48:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:43.980 01:48:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:43.980 01:48:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:13:43.980 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:43.980 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms
00:13:43.980
00:13:43.980 --- 10.0.0.2 ping statistics ---
00:13:43.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:43.980 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms
00:13:43.980 01:48:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:43.980 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:43.980 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms
00:13:43.980
00:13:43.980 --- 10.0.0.1 ping statistics ---
00:13:43.980 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:43.980 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms
00:13:43.980 01:48:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:43.980 01:48:29 -- nvmf/common.sh@410 -- # return 0
00:13:43.980 01:48:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:13:43.980 01:48:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:43.980 01:48:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:13:43.980 01:48:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:13:43.980 01:48:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:43.980 01:48:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:13:43.980 01:48:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:13:43.980 01:48:29 -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:13:43.981 01:48:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:13:43.981 01:48:29 -- common/autotest_common.sh@712 -- # xtrace_disable
00:13:43.981 01:48:29 -- common/autotest_common.sh@10 -- # set +x
00:13:43.981 01:48:29 -- nvmf/common.sh@469 -- # nvmfpid=2112696
00:13:43.981 01:48:29 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:13:43.981 01:48:29 -- nvmf/common.sh@470 -- # waitforlisten 2112696
00:13:43.981 01:48:29 -- common/autotest_common.sh@819 -- # '[' -z 2112696 ']'
00:13:43.981 01:48:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:43.981 01:48:29 -- common/autotest_common.sh@824 -- # local max_retries=100
00:13:43.981 01:48:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:43.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:43.981 01:48:29 -- common/autotest_common.sh@828 -- # xtrace_disable
00:13:43.981 01:48:29 -- common/autotest_common.sh@10 -- # set +x
00:13:43.981 [2024-04-15 01:48:29.591506] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
00:13:43.981 [2024-04-15 01:48:29.591608] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:43.981 EAL: No free 2048 kB hugepages reported on node 1
00:13:44.240 [2024-04-15 01:48:29.661819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:13:44.240 [2024-04-15 01:48:29.755172] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:13:44.240 [2024-04-15 01:48:29.755353] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:44.240 [2024-04-15 01:48:29.755374] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:44.240 [2024-04-15 01:48:29.755389] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:44.240 [2024-04-15 01:48:29.755476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:13:44.240 [2024-04-15 01:48:29.755533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:13:44.240 [2024-04-15 01:48:29.755537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:45.174 01:48:30 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:13:45.174 01:48:30 -- common/autotest_common.sh@852 -- # return 0
00:13:45.174 01:48:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:13:45.174 01:48:30 -- common/autotest_common.sh@718 -- # xtrace_disable
00:13:45.174 01:48:30 -- common/autotest_common.sh@10 -- # set +x
00:13:45.174 01:48:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:45.174 01:48:30 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:13:45.174 01:48:30 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:45.174 01:48:30 -- common/autotest_common.sh@10 -- # set +x
00:13:45.174 [2024-04-15 01:48:30.564209] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:45.174 01:48:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:45.174 01:48:30 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:13:45.174 01:48:30 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:45.174 01:48:30 -- common/autotest_common.sh@10 -- # set +x
00:13:45.174 Malloc0
00:13:45.174 01:48:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:45.174 01:48:30 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:13:45.174 01:48:30 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:45.175 01:48:30 -- common/autotest_common.sh@10 -- # set +x
00:13:45.175 Delay0
00:13:45.175 01:48:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:45.175 01:48:30 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:13:45.175 01:48:30 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:45.175 01:48:30 -- common/autotest_common.sh@10 -- # set +x
00:13:45.175 01:48:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:45.175 01:48:30 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:13:45.175 01:48:30 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:45.175 01:48:30 -- common/autotest_common.sh@10 -- # set +x
00:13:45.175 01:48:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:45.175 01:48:30 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:13:45.175 01:48:30 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:45.175 01:48:30 -- common/autotest_common.sh@10 -- # set +x
00:13:45.175 [2024-04-15 01:48:30.630232] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:45.175 01:48:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:45.175 01:48:30 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:13:45.175 01:48:30 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:45.175 01:48:30 -- common/autotest_common.sh@10 -- # set +x
00:13:45.175 01:48:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:45.175 01:48:30 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
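
The rpc_cmd calls above assemble the abort-test target: a TCP transport, a 64 MB malloc bdev with 4096-byte blocks wrapped in a delay bdev whose large -r/-t/-w/-n latency values keep I/O in flight long enough to be abortable, then a subsystem carrying that namespace plus data and discovery listeners. The same bring-up as bare rpc.py calls (a sketch; rpc.py stands for the full scripts/rpc.py path shown in the trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
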
00:13:45.175 EAL: No free 2048 kB hugepages reported on node 1
00:13:45.175 [2024-04-15 01:48:30.687061] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:13:47.704 Initializing NVMe Controllers
00:13:47.704 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:13:47.704 controller IO queue size 128 less than required
00:13:47.704 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:13:47.704 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:13:47.704 Initialization complete. Launching workers.
00:13:47.704 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 32891
00:13:47.704 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 32952, failed to submit 62
00:13:47.704 success 32891, unsuccess 61, failed 0
00:13:47.704 01:48:32 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:13:47.704 01:48:32 -- common/autotest_common.sh@551 -- # xtrace_disable
00:13:47.704 01:48:32 -- common/autotest_common.sh@10 -- # set +x
00:13:47.704 01:48:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:13:47.704 01:48:32 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:13:47.704 01:48:32 -- target/abort.sh@38 -- # nvmftestfini
00:13:47.704 01:48:32 -- nvmf/common.sh@476 -- # nvmfcleanup
00:13:47.704 01:48:32 -- nvmf/common.sh@116 -- # sync
00:13:47.704 01:48:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:13:47.704 01:48:32 -- nvmf/common.sh@119 -- # set +e
00:13:47.704 01:48:32 -- nvmf/common.sh@120 -- # for i in {1..20}
00:13:47.704 01:48:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:13:47.704 rmmod nvme_tcp
00:13:47.704 rmmod nvme_fabrics
00:13:47.704 rmmod nvme_keyring
00:13:47.704 01:48:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:13:47.704 01:48:32 -- nvmf/common.sh@123 -- # set -e
00:13:47.704 01:48:32 -- nvmf/common.sh@124 -- # return 0
00:13:47.704 01:48:32 -- nvmf/common.sh@477 -- # '[' -n 2112696 ']'
00:13:47.704 01:48:32 -- nvmf/common.sh@478 -- # killprocess 2112696
00:13:47.704 01:48:32 -- common/autotest_common.sh@926 -- # '[' -z 2112696 ']'
00:13:47.704 01:48:32 -- common/autotest_common.sh@930 -- # kill -0 2112696
00:13:47.704 01:48:32 -- common/autotest_common.sh@931 -- # uname
00:13:47.704 01:48:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:13:47.704 01:48:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2112696
00:13:47.704 01:48:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:13:47.704 01:48:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:13:47.704 01:48:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2112696'
00:13:47.704 killing process with pid 2112696
00:13:47.704 01:48:32 -- common/autotest_common.sh@945 -- # kill 2112696
00:13:47.704 01:48:32 -- common/autotest_common.sh@950 -- # wait 2112696
00:13:47.704 01:48:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:13:47.704 01:48:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:13:47.704 01:48:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:13:47.704 01:48:33 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:47.704 01:48:33 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:13:47.704 01:48:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:47.704 01:48:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:47.704 01:48:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:49.610 01:48:35 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:13:49.610
00:13:49.610 real 0m7.939s
00:13:49.610 user 0m12.383s
00:13:49.610 sys 0m2.608s
00:13:49.610 01:48:35 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:13:49.610 01:48:35 -- common/autotest_common.sh@10 -- # set +x
00:13:49.610 ************************************
00:13:49.610 END TEST nvmf_abort
00:13:49.610 ************************************
00:13:49.610 01:48:35 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:13:49.610 01:48:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:13:49.610 01:48:35 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:13:49.610 01:48:35 -- common/autotest_common.sh@10 -- # set +x
00:13:49.610 ************************************
00:13:49.610 START TEST nvmf_ns_hotplug_stress
00:13:49.610 ************************************
00:13:49.610 01:48:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:13:49.610 * Looking for test storage...
00:13:49.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:13:49.610 01:48:35 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:13:49.610 01:48:35 -- nvmf/common.sh@7 -- # uname -s
00:13:49.610 01:48:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:13:49.610 01:48:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:13:49.610 01:48:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:13:49.610 01:48:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:13:49.610 01:48:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:13:49.610 01:48:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:13:49.610 01:48:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:13:49.610 01:48:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:13:49.610 01:48:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:13:49.610 01:48:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:13:49.610 01:48:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:13:49.610 01:48:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:13:49.610 01:48:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:13:49.610 01:48:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:13:49.610 01:48:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:13:49.610 01:48:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:13:49.610 01:48:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:13:49.610 01:48:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:13:49.610 01:48:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:13:49.610 01:48:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:49.610 01:48:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:49.610 01:48:35 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:49.610 01:48:35 -- paths/export.sh@5 -- # export PATH
00:13:49.610 01:48:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:13:49.610 01:48:35 -- nvmf/common.sh@46 -- # : 0
00:13:49.610 01:48:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:13:49.610 01:48:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:13:49.610 01:48:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:13:49.610 01:48:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:49.610 01:48:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:49.611 01:48:35 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:13:49.611 01:48:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:13:49.611 01:48:35 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:13:49.611 01:48:35 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:13:49.611 01:48:35 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit
00:13:49.611 01:48:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:13:49.611 01:48:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:13:49.611 01:48:35 -- nvmf/common.sh@436 -- # prepare_net_devs
00:13:49.611 01:48:35 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:13:49.611 01:48:35 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:13:49.611 01:48:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:49.611 01:48:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:49.611 01:48:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:13:49.611 01:48:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:13:49.611 01:48:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
00:13:49.611 01:48:35 -- nvmf/common.sh@284 -- # xtrace_disable
00:13:49.611 01:48:35 -- common/autotest_common.sh@10 -- # set +x
00:13:51.509 01:48:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci
00:13:51.509 01:48:37 -- nvmf/common.sh@290 -- # pci_devs=()
00:13:51.509 01:48:37 -- nvmf/common.sh@290 -- # local -a pci_devs
00:13:51.509 01:48:37 -- nvmf/common.sh@291 -- # pci_net_devs=()
00:13:51.509 01:48:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs
00:13:51.509 01:48:37 -- nvmf/common.sh@292 -- # pci_drivers=()
00:13:51.509 01:48:37 -- nvmf/common.sh@292 -- # local -A pci_drivers
00:13:51.509 01:48:37 -- nvmf/common.sh@294 -- # net_devs=()
00:13:51.509 01:48:37 -- nvmf/common.sh@294 -- # local -ga net_devs
00:13:51.509 01:48:37 -- nvmf/common.sh@295 -- # e810=()
00:13:51.509 01:48:37 -- nvmf/common.sh@295 -- # local -ga e810
00:13:51.509 01:48:37 -- nvmf/common.sh@296 -- # x722=()
00:13:51.509 01:48:37 -- nvmf/common.sh@296 -- # local -ga x722
00:13:51.509 01:48:37 -- nvmf/common.sh@297 -- # mlx=()
00:13:51.509 01:48:37 -- nvmf/common.sh@297 -- # local -ga mlx
00:13:51.509 01:48:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:13:51.509 01:48:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:13:51.509 01:48:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:13:51.509 01:48:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:13:51.509 01:48:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:13:51.509 01:48:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:13:51.509 01:48:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:13:51.509 01:48:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:13:51.509 01:48:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:13:51.509 01:48:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:13:51.509 01:48:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:13:51.509 01:48:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}")
00:13:51.509 01:48:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]]
00:13:51.509 01:48:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]]
00:13:51.509 01:48:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]]
00:13:51.509 01:48:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}")
00:13:51.509 01:48:37 -- nvmf/common.sh@334 -- # (( 2 == 0 ))
00:13:51.509 01:48:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:13:51.509 01:48:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:13:51.509 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:13:51.509 01:48:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:13:51.509 01:48:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:13:51.509 01:48:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:51.509 01:48:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:51.509 01:48:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:13:51.509 01:48:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}"
00:13:51.509 01:48:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:13:51.509 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:13:51.509 01:48:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]]
00:13:51.509 01:48:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]]
00:13:51.509 01:48:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:13:51.509 01:48:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:13:51.509 01:48:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]]
00:13:51.509 01:48:37 -- nvmf/common.sh@365 -- # (( 0 > 0 ))
00:13:51.509 01:48:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]]
00:13:51.509 01:48:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]]
00:13:51.509 01:48:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:13:51.509 01:48:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:51.509 01:48:37 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:13:51.509 01:48:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:51.509 01:48:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:13:51.509 Found net devices under 0000:0a:00.0: cvl_0_0
00:13:51.509 01:48:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:13:51.509 01:48:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}"
00:13:51.509 01:48:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:13:51.510 01:48:37 -- nvmf/common.sh@383 -- # (( 1 == 0 ))
00:13:51.510 01:48:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:13:51.510 01:48:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:13:51.510 Found net devices under 0000:0a:00.1: cvl_0_1
00:13:51.510 01:48:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}")
00:13:51.510 01:48:37 -- nvmf/common.sh@392 -- # (( 2 == 0 ))
00:13:51.510 01:48:37 -- nvmf/common.sh@402 -- # is_hw=yes
00:13:51.510 01:48:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]]
00:13:51.510 01:48:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]]
00:13:51.510 01:48:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init
00:13:51.510 01:48:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:13:51.510 01:48:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:51.510 01:48:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:13:51.510 01:48:37 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:13:51.510 01:48:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:13:51.510 01:48:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:13:51.510 01:48:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:13:51.510 01:48:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:13:51.510 01:48:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:13:51.510 01:48:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:13:51.510 01:48:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:13:51.510 01:48:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:13:51.510 01:48:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:13:51.768 01:48:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:13:51.768 01:48:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:13:51.768 01:48:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:13:51.768 01:48:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:13:51.768 01:48:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:13:51.768 01:48:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:13:51.768 01:48:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:13:51.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:51.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms
00:13:51.768
00:13:51.768 --- 10.0.0.2 ping statistics ---
00:13:51.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:51.768 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
00:13:51.768 01:48:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:13:51.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:51.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms
00:13:51.768
00:13:51.768 --- 10.0.0.1 ping statistics ---
00:13:51.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:51.768 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms
00:13:51.768 01:48:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:51.768 01:48:37 -- nvmf/common.sh@410 -- # return 0
00:13:51.768 01:48:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:13:51.768 01:48:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:51.768 01:48:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:13:51.768 01:48:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:13:51.768 01:48:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:51.768 01:48:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:13:51.768 01:48:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:13:51.768 01:48:37 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE
00:13:51.768 01:48:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:13:51.768 01:48:37 -- common/autotest_common.sh@712 -- # xtrace_disable
00:13:51.768 01:48:37 -- common/autotest_common.sh@10 -- # set +x
00:13:51.768 01:48:37 -- nvmf/common.sh@469 -- # nvmfpid=2115069
00:13:51.768 01:48:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:13:51.768 01:48:37 -- nvmf/common.sh@470 -- # waitforlisten 2115069
00:13:51.768 01:48:37 -- common/autotest_common.sh@819 -- # '[' -z 2115069 ']'
00:13:51.768 01:48:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:51.768 01:48:37 -- common/autotest_common.sh@824 -- # local max_retries=100
00:13:51.768 01:48:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:51.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:51.768 01:48:37 -- common/autotest_common.sh@828 -- # xtrace_disable
00:13:51.768 01:48:37 -- common/autotest_common.sh@10 -- # set +x
00:13:51.768 [2024-04-15 01:48:37.327749] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
00:13:51.768 [2024-04-15 01:48:37.327836] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:51.768 EAL: No free 2048 kB hugepages reported on node 1
00:13:51.768 [2024-04-15 01:48:37.392359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3
00:13:52.026 [2024-04-15 01:48:37.478467] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:13:52.026 [2024-04-15 01:48:37.478615] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:52.026 [2024-04-15 01:48:37.478633] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:52.026 [2024-04-15 01:48:37.478646] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:52.026 [2024-04-15 01:48:37.478726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:13:52.026 [2024-04-15 01:48:37.478777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:13:52.026 [2024-04-15 01:48:37.478779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:13:52.959 01:48:38 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:13:52.959 01:48:38 -- common/autotest_common.sh@852 -- # return 0
00:13:52.959 01:48:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:13:52.959 01:48:38 -- common/autotest_common.sh@718 -- # xtrace_disable
00:13:52.959 01:48:38 -- common/autotest_common.sh@10 -- # set +x
00:13:52.959 01:48:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:52.959 01:48:38 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000
00:13:52.959 01:48:38 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:13:52.959 [2024-04-15 01:48:38.589717] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:53.236 01:48:38 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:13:53.236 01:48:38 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:53.505 [2024-04-15 01:48:39.084592] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:53.505 01:48:39 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:13:53.763 01:48:39 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:13:54.022 Malloc0
00:13:54.022 01:48:39 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:13:54.279 Delay0
00:13:54.279 01:48:39 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:54.537 01:48:40 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:13:54.795 NULL1
00:13:54.795 01:48:40 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:13:55.052 01:48:40 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=2115499
00:13:55.053 01:48:40 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:13:55.053 01:48:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:13:55.053 01:48:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:55.053 EAL: No free 2048 kB hugepages reported on node 1
00:13:56.427 Read completed with error (sct=0, sc=11)
00:13:56.427 01:48:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:56.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:56.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:56.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:56.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:56.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:56.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:56.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:56.427 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:56.427 01:48:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001
00:13:56.427 01:48:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:13:56.684 true
00:13:56.684 01:48:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:13:56.684 01:48:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:57.615 01:48:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:57.873 01:48:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002
00:13:57.873 01:48:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:13:57.873 true
00:13:57.873 01:48:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:13:57.873 01:48:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:58.131 01:48:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:58.389 01:48:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003
00:13:58.389 01:48:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:13:58.647 true
00:13:58.647 01:48:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
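
With spdk_nvme_perf (PID 2115499) now issuing random reads against the subsystem, the harness loops for as long as that process stays alive: detach namespace 1 under I/O, re-attach a bdev, then bump null_size and grow the NULL1 bdev by one block (1001, 1002, ...). The suppressed "Read completed with error (sct=0, sc=11)" messages on the initiator side are the expected symptom of reads landing while the namespace is detached. A sketch of the loop the trace implies (the real script's exact step order and variables may differ; rpc.py again stands for the full scripts/rpc.py path):

    while kill -0 "$PERF_PID" 2>/dev/null; do
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1    # hot-unplug under I/O
        rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # hot-plug back
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"                      # resize concurrently
    done
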
00:13:58.647 01:48:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:58.905 01:48:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:59.163 01:48:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004
00:13:59.163 01:48:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:13:59.421 true
00:13:59.421 01:48:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:13:59.421 01:48:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:00.796 01:48:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:00.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:00.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:00.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:00.796 01:48:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005
00:14:00.796 01:48:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:14:01.054 true
00:14:01.054 01:48:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:01.054 01:48:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:01.311 01:48:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:01.568 01:48:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006
00:14:01.568 01:48:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:14:01.827 true
00:14:01.827 01:48:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:01.827 01:48:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:02.761 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:02.761 01:48:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:03.019 01:48:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007
00:14:03.019 01:48:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:14:03.277 true
00:14:03.277 01:48:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:03.277 01:48:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:03.535 01:48:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:03.793 01:48:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008
00:14:03.793 01:48:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:14:04.051 true
00:14:04.051 01:48:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:04.051 01:48:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:04.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:04.986 01:48:50 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:04.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:04.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:04.986 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:05.244 01:48:50 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009
00:14:05.244 01:48:50 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:14:05.502 true
00:14:05.502 01:48:50 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:05.502 01:48:50 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:05.759 01:48:51 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:06.017 01:48:51 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010
00:14:06.017 01:48:51 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:14:06.017 true
00:14:06.017 01:48:51 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:06.017 01:48:51 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:07.417 01:48:52 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:07.417 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:07.417 01:48:52 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011
00:14:07.417 01:48:52 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:14:07.675 true
00:14:07.675 01:48:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:07.675 01:48:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:07.933 01:48:53 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:08.191 01:48:53 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012
00:14:08.191 01:48:53 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:14:08.448 true
00:14:08.448 01:48:53 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:08.448 01:48:53 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:09.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:09.383 01:48:54 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:09.383 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:09.383 01:48:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013
00:14:09.383 01:48:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:14:09.641 true
00:14:09.641 01:48:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:09.641 01:48:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:09.898 01:48:55 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:10.156 01:48:55 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014
00:14:10.156 01:48:55 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:14:10.414 true
00:14:10.414 01:48:55 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:10.414 01:48:55 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:11.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:11.348 01:48:56 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:11.348 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:11.605 01:48:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015
00:14:11.605 01:48:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:14:11.863 true
00:14:11.863 01:48:57 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:11.863 01:48:57 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:12.121 01:48:57 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:12.379 01:48:57 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016
00:14:12.379 01:48:57 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:14:12.636 true
00:14:12.636 01:48:58 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:12.636 01:48:58 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:13.570 01:48:58 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:13.570 01:48:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017
00:14:13.570 01:48:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:14:13.826 true
00:14:13.826 01:48:59 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:13.826 01:48:59 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:14.083 01:48:59 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:14.340 01:48:59 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018
00:14:14.340 01:48:59 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:14:14.597 true
00:14:14.597 01:49:00 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:14.598 01:49:00 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:15.529 01:49:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:15.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:15.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:15.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:15.529 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:15.787 01:49:01 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019
00:14:15.787 01:49:01 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:14:16.044 true
00:14:16.044 01:49:01 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:16.044 01:49:01 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:16.302 01:49:01 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:16.560 01:49:02 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020
00:14:16.560 01:49:02 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:14:16.817 true
00:14:16.817 01:49:02 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:16.817 01:49:02 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:17.748 01:49:03 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:18.005 01:49:03 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021
00:14:18.005 01:49:03 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:14:18.262 true
00:14:18.262 01:49:03 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:18.262 01:49:03 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:18.519 01:49:04 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:18.776 01:49:04 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022
00:14:18.776 01:49:04 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:14:19.033 true
00:14:19.033 01:49:04 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:19.033 01:49:04 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:20.000 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:20.000 01:49:05 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:20.258 01:49:05 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023
00:14:20.258 01:49:05 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:14:20.516 true
00:14:20.516 01:49:05 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:20.516 01:49:05 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:20.778 01:49:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:21.066 01:49:06 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024
00:14:21.066 01:49:06 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:14:21.066 true
00:14:21.066 01:49:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:21.066 01:49:06 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:21.325 01:49:06 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:21.583 01:49:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025
00:14:21.583 01:49:07 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:14:21.841 true
00:14:21.841 01:49:07 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499
00:14:21.841 01:49:07 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:14:23.215 01:49:08 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:23.215 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:23.215 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:23.215 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:23.215 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:23.215 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:23.215 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:23.215 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:14:23.215 01:49:08 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026
00:14:23.215 01:49:08 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
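
Each block above is one turn of that loop; by this point NULL1 has been resized from 1000 up through 1026 with the perf job still running, and the bursts of suppressed sc=11 read errors mark the windows while namespace 1 was detached. To confirm the resizes and the attach state from a third shell, the standard SPDK RPCs should show it (a sketch, not part of this run; output is JSON):

    rpc.py bdev_get_bdevs -b NULL1    # num_blocks should track the last bdev_null_resize
    rpc.py nvmf_get_subsystems        # lists cnode1 and its currently attached namespaces
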
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:14:23.473 true 00:14:23.473 01:49:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499 00:14:23.473 01:49:08 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.407 01:49:09 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:24.407 01:49:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:14:24.407 01:49:09 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:14:24.665 true 00:14:24.665 01:49:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499 00:14:24.665 01:49:10 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:24.922 01:49:10 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:25.180 01:49:10 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:14:25.180 01:49:10 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:14:25.438 Initializing NVMe Controllers 00:14:25.438 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:25.438 Controller IO queue size 128, less than required. 00:14:25.438 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:25.438 Controller IO queue size 128, less than required. 00:14:25.438 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:25.438 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:25.438 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:25.438 Initialization complete. Launching workers. 
00:14:25.438 ========================================================
00:14:25.438 Latency(us)
00:14:25.438 Device Information : IOPS MiB/s Average min max
00:14:25.438 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1192.59 0.58 56982.99 1867.85 1014640.33
00:14:25.438 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11998.01 5.86 10669.71 2482.14 442444.99
00:14:25.438 ========================================================
00:14:25.438 Total : 13190.60 6.44 14856.98 1867.85 1014640.33
00:14:25.438
00:14:25.438 true 00:14:25.438 01:49:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2115499 00:14:25.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (2115499) - No such process 00:14:25.438 01:49:10 -- target/ns_hotplug_stress.sh@44 -- # wait 2115499 00:14:25.438 01:49:10 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:25.438 01:49:10 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:14:25.438 01:49:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:25.438 01:49:10 -- nvmf/common.sh@116 -- # sync 00:14:25.438 01:49:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:25.438 01:49:10 -- nvmf/common.sh@119 -- # set +e 00:14:25.438 01:49:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:25.438 01:49:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:25.438 rmmod nvme_tcp 00:14:25.438 rmmod nvme_fabrics 00:14:25.438 rmmod nvme_keyring 00:14:25.439 01:49:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:25.439 01:49:11 -- nvmf/common.sh@123 -- # set -e 00:14:25.439 01:49:11 -- nvmf/common.sh@124 -- # return 0 00:14:25.439 01:49:11 -- nvmf/common.sh@477 -- # '[' -n 2115069 ']' 00:14:25.439 01:49:11 -- nvmf/common.sh@478 -- # killprocess 2115069 00:14:25.439 01:49:11 -- common/autotest_common.sh@926 -- # '[' -z 2115069 ']' 00:14:25.439 01:49:11 -- common/autotest_common.sh@930 -- # kill -0 2115069 00:14:25.439 01:49:11 -- common/autotest_common.sh@931 -- # uname 00:14:25.439 01:49:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:25.439 01:49:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2115069 00:14:25.439 01:49:11 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:25.439 01:49:11 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:25.439 01:49:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2115069' 00:14:25.439 killing process with pid 2115069 00:14:25.439 01:49:11 -- common/autotest_common.sh@945 -- # kill 2115069 00:14:25.439 01:49:11 -- common/autotest_common.sh@950 -- # wait 2115069 00:14:25.698 01:49:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:25.698 01:49:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:25.698 01:49:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:25.698 01:49:11 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:25.698 01:49:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:25.698 01:49:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.698 01:49:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:25.698 01:49:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.234 01:49:13 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:28.234 00:14:28.234 real 0m38.204s 00:14:28.234 user 2m27.977s 00:14:28.234 sys 0m9.851s 00:14:28.234 01:49:13 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:28.234 01:49:13 -- common/autotest_common.sh@10 -- # set +x 00:14:28.234 ************************************ 00:14:28.234 END TEST nvmf_ns_hotplug_stress 00:14:28.234 ************************************ 00:14:28.234 01:49:13 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:28.234 01:49:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:28.234 01:49:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:28.234 01:49:13 -- common/autotest_common.sh@10 -- # set +x 00:14:28.234 ************************************ 00:14:28.234 START TEST nvmf_connect_stress 00:14:28.234 ************************************ 00:14:28.234 01:49:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:14:28.234 * Looking for test storage... 00:14:28.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.234 01:49:13 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.234 01:49:13 -- nvmf/common.sh@7 -- # uname -s 00:14:28.234 01:49:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.234 01:49:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.234 01:49:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.234 01:49:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.234 01:49:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.234 01:49:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.234 01:49:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.234 01:49:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.234 01:49:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.234 01:49:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.234 01:49:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.234 01:49:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:28.234 01:49:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.234 01:49:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.234 01:49:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.234 01:49:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.234 01:49:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.234 01:49:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.234 01:49:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.234 01:49:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.234 01:49:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.234 01:49:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.234 01:49:13 -- paths/export.sh@5 -- # export PATH 00:14:28.234 01:49:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.234 01:49:13 -- nvmf/common.sh@46 -- # : 0 00:14:28.234 01:49:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:28.234 01:49:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:28.234 01:49:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:28.234 01:49:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.234 01:49:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.234 01:49:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:28.234 01:49:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:28.234 01:49:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:28.234 01:49:13 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:28.234 01:49:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:28.234 01:49:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:28.234 01:49:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:28.234 01:49:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:28.235 01:49:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:28.235 01:49:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.235 01:49:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.235 01:49:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.235 01:49:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:28.235 01:49:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:28.235 01:49:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:28.235 01:49:13 -- common/autotest_common.sh@10 -- # set +x 00:14:30.137 01:49:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:30.137 01:49:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:30.137 01:49:15 -- nvmf/common.sh@290 -- # local -a pci_devs 
00:14:30.137 01:49:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:30.137 01:49:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:30.137 01:49:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:30.137 01:49:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:30.137 01:49:15 -- nvmf/common.sh@294 -- # net_devs=() 00:14:30.137 01:49:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:30.137 01:49:15 -- nvmf/common.sh@295 -- # e810=() 00:14:30.137 01:49:15 -- nvmf/common.sh@295 -- # local -ga e810 00:14:30.137 01:49:15 -- nvmf/common.sh@296 -- # x722=() 00:14:30.137 01:49:15 -- nvmf/common.sh@296 -- # local -ga x722 00:14:30.137 01:49:15 -- nvmf/common.sh@297 -- # mlx=() 00:14:30.137 01:49:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:30.137 01:49:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.137 01:49:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.137 01:49:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.137 01:49:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.137 01:49:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.137 01:49:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.137 01:49:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.137 01:49:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.137 01:49:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.137 01:49:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.137 01:49:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.137 01:49:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:30.137 01:49:15 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:30.137 01:49:15 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:30.137 01:49:15 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:30.137 01:49:15 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:30.137 01:49:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:30.137 01:49:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:30.137 01:49:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:30.137 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:30.137 01:49:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:30.137 01:49:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:30.137 01:49:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.137 01:49:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.137 01:49:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:30.137 01:49:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:30.137 01:49:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:30.137 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:30.138 01:49:15 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:30.138 01:49:15 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:30.138 01:49:15 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.138 01:49:15 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.138 01:49:15 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:30.138 01:49:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:30.138 01:49:15 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:30.138 01:49:15 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 
00:14:30.138 01:49:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:30.138 01:49:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.138 01:49:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:30.138 01:49:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.138 01:49:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:30.138 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:30.138 01:49:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.138 01:49:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:30.138 01:49:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.138 01:49:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:30.138 01:49:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.138 01:49:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:30.138 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:30.138 01:49:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.138 01:49:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:30.138 01:49:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:30.138 01:49:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:30.138 01:49:15 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:30.138 01:49:15 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:30.138 01:49:15 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.138 01:49:15 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.138 01:49:15 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:30.138 01:49:15 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:30.138 01:49:15 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:30.138 01:49:15 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:30.138 01:49:15 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:30.138 01:49:15 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:30.138 01:49:15 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.138 01:49:15 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:30.138 01:49:15 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:30.138 01:49:15 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:30.138 01:49:15 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:30.138 01:49:15 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:30.138 01:49:15 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:30.138 01:49:15 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:30.138 01:49:15 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:30.138 01:49:15 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:30.138 01:49:15 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:30.138 01:49:15 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:30.138 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:30.138 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms
00:14:30.138
00:14:30.138 --- 10.0.0.2 ping statistics ---
00:14:30.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:30.138 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms
00:14:30.138 01:49:15 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:30.138 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:30.138 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms
00:14:30.138
00:14:30.138 --- 10.0.0.1 ping statistics ---
00:14:30.138 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:30.138 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms
00:14:30.138 01:49:15 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.138 01:49:15 -- nvmf/common.sh@410 -- # return 0 00:14:30.138 01:49:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:30.138 01:49:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.138 01:49:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:30.138 01:49:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:30.138 01:49:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.138 01:49:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:30.138 01:49:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:30.138 01:49:15 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:30.138 01:49:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:30.138 01:49:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:30.138 01:49:15 -- common/autotest_common.sh@10 -- # set +x 00:14:30.138 01:49:15 -- nvmf/common.sh@469 -- # nvmfpid=2121205 00:14:30.138 01:49:15 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:30.138 01:49:15 -- nvmf/common.sh@470 -- # waitforlisten 2121205 00:14:30.138 01:49:15 -- common/autotest_common.sh@819 -- # '[' -z 2121205 ']' 00:14:30.138 01:49:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.138 01:49:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:30.138 01:49:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:30.138 01:49:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:30.138 01:49:15 -- common/autotest_common.sh@10 -- # set +x 00:14:30.138 [2024-04-15 01:49:15.585384] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:14:30.138 [2024-04-15 01:49:15.585470] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.138 EAL: No free 2048 kB hugepages reported on node 1 00:14:30.138 [2024-04-15 01:49:15.653761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:30.138 [2024-04-15 01:49:15.743915] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:30.138 [2024-04-15 01:49:15.744067] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:30.138 [2024-04-15 01:49:15.744104] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.138 [2024-04-15 01:49:15.744120] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:30.138 [2024-04-15 01:49:15.744181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.138 [2024-04-15 01:49:15.744302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.138 [2024-04-15 01:49:15.744305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:31.073 01:49:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:31.073 01:49:16 -- common/autotest_common.sh@852 -- # return 0 00:14:31.073 01:49:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:31.073 01:49:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:31.073 01:49:16 -- common/autotest_common.sh@10 -- # set +x 00:14:31.073 01:49:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:31.073 01:49:16 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:31.073 01:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.073 01:49:16 -- common/autotest_common.sh@10 -- # set +x 00:14:31.073 [2024-04-15 01:49:16.595237] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:31.073 01:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.073 01:49:16 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:31.073 01:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.073 01:49:16 -- common/autotest_common.sh@10 -- # set +x 00:14:31.073 01:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.073 01:49:16 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:31.073 01:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.073 01:49:16 -- common/autotest_common.sh@10 -- # set +x 00:14:31.073 [2024-04-15 01:49:16.619198] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:31.073 01:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.073 01:49:16 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:31.073 01:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.073 01:49:16 -- common/autotest_common.sh@10 -- # set +x 00:14:31.073 NULL1 00:14:31.073 01:49:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.073 01:49:16 -- target/connect_stress.sh@21 -- # PERF_PID=2121363 00:14:31.073 01:49:16 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:31.073 01:49:16 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:31.073 01:49:16 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:31.073 01:49:16 -- target/connect_stress.sh@28 -- # cat 00:14:31.073 01:49:16 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:31.073 01:49:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.073 01:49:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.073 01:49:16 -- common/autotest_common.sh@10 -- # set +x 00:14:31.638 01:49:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.638 01:49:17 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:31.638 01:49:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:31.638 01:49:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.638 01:49:17 -- common/autotest_common.sh@10 -- # set +x 00:14:31.896 01:49:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:31.896 01:49:17 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:31.896 01:49:17 -- target/connect_stress.sh@35 -- 
# rpc_cmd 00:14:31.896 01:49:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:31.896 01:49:17 -- common/autotest_common.sh@10 -- # set +x 00:14:32.153 01:49:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.153 01:49:17 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:32.153 01:49:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.153 01:49:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.153 01:49:17 -- common/autotest_common.sh@10 -- # set +x 00:14:32.410 01:49:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.410 01:49:17 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:32.410 01:49:17 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.410 01:49:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.410 01:49:17 -- common/autotest_common.sh@10 -- # set +x 00:14:32.668 01:49:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:32.668 01:49:18 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:32.668 01:49:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:32.668 01:49:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:32.668 01:49:18 -- common/autotest_common.sh@10 -- # set +x 00:14:33.232 01:49:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.232 01:49:18 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:33.232 01:49:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.232 01:49:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.232 01:49:18 -- common/autotest_common.sh@10 -- # set +x 00:14:33.489 01:49:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.489 01:49:18 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:33.489 01:49:18 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.489 01:49:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.489 01:49:18 -- common/autotest_common.sh@10 -- # set +x 00:14:33.746 01:49:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.746 01:49:19 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:33.746 01:49:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:33.746 01:49:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.746 01:49:19 -- common/autotest_common.sh@10 -- # set +x 00:14:34.003 01:49:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.003 01:49:19 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:34.003 01:49:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.003 01:49:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.003 01:49:19 -- common/autotest_common.sh@10 -- # set +x 00:14:34.261 01:49:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.261 01:49:19 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:34.261 01:49:19 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.261 01:49:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.261 01:49:19 -- common/autotest_common.sh@10 -- # set +x 00:14:34.825 01:49:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.825 01:49:20 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:34.825 01:49:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:34.825 01:49:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.825 01:49:20 -- common/autotest_common.sh@10 -- # set +x 00:14:35.082 01:49:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.082 01:49:20 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:35.082 01:49:20 -- target/connect_stress.sh@35 -- # rpc_cmd 
00:14:35.082 01:49:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.082 01:49:20 -- common/autotest_common.sh@10 -- # set +x 00:14:35.339 01:49:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.339 01:49:20 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:35.340 01:49:20 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.340 01:49:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.340 01:49:20 -- common/autotest_common.sh@10 -- # set +x 00:14:35.597 01:49:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.597 01:49:21 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:35.597 01:49:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.597 01:49:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.597 01:49:21 -- common/autotest_common.sh@10 -- # set +x 00:14:35.880 01:49:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:35.880 01:49:21 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:35.880 01:49:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:35.880 01:49:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:35.880 01:49:21 -- common/autotest_common.sh@10 -- # set +x 00:14:36.447 01:49:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.447 01:49:21 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:36.447 01:49:21 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.447 01:49:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.447 01:49:21 -- common/autotest_common.sh@10 -- # set +x 00:14:36.704 01:49:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.704 01:49:22 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:36.704 01:49:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.704 01:49:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.704 01:49:22 -- common/autotest_common.sh@10 -- # set +x 00:14:36.962 01:49:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:36.962 01:49:22 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:36.962 01:49:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:36.962 01:49:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:36.962 01:49:22 -- common/autotest_common.sh@10 -- # set +x 00:14:37.220 01:49:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.220 01:49:22 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:37.220 01:49:22 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.220 01:49:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.220 01:49:22 -- common/autotest_common.sh@10 -- # set +x 00:14:37.478 01:49:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:37.478 01:49:23 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:37.478 01:49:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:37.478 01:49:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:37.478 01:49:23 -- common/autotest_common.sh@10 -- # set +x 00:14:38.043 01:49:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.043 01:49:23 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:38.043 01:49:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.043 01:49:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.043 01:49:23 -- common/autotest_common.sh@10 -- # set +x 00:14:38.300 01:49:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.300 01:49:23 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:38.300 01:49:23 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.300 
01:49:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.300 01:49:23 -- common/autotest_common.sh@10 -- # set +x 00:14:38.558 01:49:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.558 01:49:24 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:38.558 01:49:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.558 01:49:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.558 01:49:24 -- common/autotest_common.sh@10 -- # set +x 00:14:38.816 01:49:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:38.816 01:49:24 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:38.816 01:49:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:38.816 01:49:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:38.816 01:49:24 -- common/autotest_common.sh@10 -- # set +x 00:14:39.073 01:49:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.073 01:49:24 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:39.073 01:49:24 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.074 01:49:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.074 01:49:24 -- common/autotest_common.sh@10 -- # set +x 00:14:39.639 01:49:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.639 01:49:25 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:39.639 01:49:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.639 01:49:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.639 01:49:25 -- common/autotest_common.sh@10 -- # set +x 00:14:39.897 01:49:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:39.897 01:49:25 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:39.897 01:49:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:39.897 01:49:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:39.897 01:49:25 -- common/autotest_common.sh@10 -- # set +x 00:14:40.155 01:49:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.155 01:49:25 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:40.155 01:49:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.155 01:49:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.155 01:49:25 -- common/autotest_common.sh@10 -- # set +x 00:14:40.413 01:49:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.413 01:49:25 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:40.413 01:49:25 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.413 01:49:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.413 01:49:25 -- common/autotest_common.sh@10 -- # set +x 00:14:40.978 01:49:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:40.978 01:49:26 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:40.978 01:49:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:40.978 01:49:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:40.978 01:49:26 -- common/autotest_common.sh@10 -- # set +x 00:14:41.236 01:49:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.236 01:49:26 -- target/connect_stress.sh@34 -- # kill -0 2121363 00:14:41.236 01:49:26 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:41.236 01:49:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:41.236 01:49:26 -- common/autotest_common.sh@10 -- # set +x 00:14:41.236 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:41.494 01:49:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:41.494 01:49:26 -- target/connect_stress.sh@34 -- # kill -0 
2121363 00:14:41.494 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2121363) - No such process 00:14:41.494 01:49:26 -- target/connect_stress.sh@38 -- # wait 2121363 00:14:41.494 01:49:26 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:41.494 01:49:26 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:41.494 01:49:26 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:41.494 01:49:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:41.494 01:49:26 -- nvmf/common.sh@116 -- # sync 00:14:41.494 01:49:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:41.494 01:49:26 -- nvmf/common.sh@119 -- # set +e 00:14:41.494 01:49:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:41.494 01:49:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:41.494 rmmod nvme_tcp 00:14:41.494 rmmod nvme_fabrics 00:14:41.494 rmmod nvme_keyring 00:14:41.494 01:49:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:41.494 01:49:27 -- nvmf/common.sh@123 -- # set -e 00:14:41.494 01:49:27 -- nvmf/common.sh@124 -- # return 0 00:14:41.494 01:49:27 -- nvmf/common.sh@477 -- # '[' -n 2121205 ']' 00:14:41.494 01:49:27 -- nvmf/common.sh@478 -- # killprocess 2121205 00:14:41.494 01:49:27 -- common/autotest_common.sh@926 -- # '[' -z 2121205 ']' 00:14:41.494 01:49:27 -- common/autotest_common.sh@930 -- # kill -0 2121205 00:14:41.494 01:49:27 -- common/autotest_common.sh@931 -- # uname 00:14:41.494 01:49:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:41.494 01:49:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2121205 00:14:41.494 01:49:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:41.494 01:49:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:41.494 01:49:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2121205' 00:14:41.494 killing process with pid 2121205 00:14:41.494 01:49:27 -- common/autotest_common.sh@945 -- # kill 2121205 00:14:41.494 01:49:27 -- common/autotest_common.sh@950 -- # wait 2121205 00:14:41.752 01:49:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:41.752 01:49:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:41.752 01:49:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:41.752 01:49:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:41.752 01:49:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:41.752 01:49:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:41.752 01:49:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:41.752 01:49:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.289 01:49:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:44.289 00:14:44.289 real 0m15.939s 00:14:44.289 user 0m40.434s 00:14:44.289 sys 0m6.046s 00:14:44.289 01:49:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:44.289 01:49:29 -- common/autotest_common.sh@10 -- # set +x 00:14:44.289 ************************************ 00:14:44.289 END TEST nvmf_connect_stress 00:14:44.289 ************************************ 00:14:44.289 01:49:29 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:44.289 01:49:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:44.289 01:49:29 -- common/autotest_common.sh@1083 -- 
# xtrace_disable 00:14:44.289 01:49:29 -- common/autotest_common.sh@10 -- # set +x 00:14:44.289 ************************************ 00:14:44.289 START TEST nvmf_fused_ordering 00:14:44.289 ************************************ 00:14:44.289 01:49:29 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:44.289 * Looking for test storage... 00:14:44.289 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:44.289 01:49:29 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:44.289 01:49:29 -- nvmf/common.sh@7 -- # uname -s 00:14:44.289 01:49:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:44.289 01:49:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:44.289 01:49:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:44.289 01:49:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:44.289 01:49:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:44.289 01:49:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:44.289 01:49:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:44.289 01:49:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:44.289 01:49:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:44.289 01:49:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:44.289 01:49:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.289 01:49:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:44.289 01:49:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:44.289 01:49:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:44.289 01:49:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:44.289 01:49:29 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:44.289 01:49:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:44.289 01:49:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:44.289 01:49:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:44.289 01:49:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.290 01:49:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.290 01:49:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.290 01:49:29 -- paths/export.sh@5 -- # export PATH 00:14:44.290 01:49:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:44.290 01:49:29 -- nvmf/common.sh@46 -- # : 0 00:14:44.290 01:49:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:44.290 01:49:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:44.290 01:49:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:44.290 01:49:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:44.290 01:49:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:44.290 01:49:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:44.290 01:49:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:44.290 01:49:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:44.290 01:49:29 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:44.290 01:49:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:44.290 01:49:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:44.290 01:49:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:44.290 01:49:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:44.290 01:49:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:44.290 01:49:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:44.290 01:49:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:44.290 01:49:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:44.290 01:49:29 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:44.290 01:49:29 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:44.290 01:49:29 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:44.290 01:49:29 -- common/autotest_common.sh@10 -- # set +x 00:14:45.665 01:49:31 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:45.665 01:49:31 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:45.665 01:49:31 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:45.665 01:49:31 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:45.665 01:49:31 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:45.665 01:49:31 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:45.665 01:49:31 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:45.665 01:49:31 -- nvmf/common.sh@294 -- # net_devs=() 00:14:45.665 01:49:31 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:45.923 01:49:31 -- nvmf/common.sh@295 -- # e810=() 00:14:45.923 01:49:31 -- nvmf/common.sh@295 -- # local -ga e810 00:14:45.923 01:49:31 -- nvmf/common.sh@296 -- # x722=() 
00:14:45.923 01:49:31 -- nvmf/common.sh@296 -- # local -ga x722 00:14:45.923 01:49:31 -- nvmf/common.sh@297 -- # mlx=() 00:14:45.923 01:49:31 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:45.923 01:49:31 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:45.923 01:49:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:45.923 01:49:31 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:45.923 01:49:31 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:45.923 01:49:31 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:45.923 01:49:31 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:45.923 01:49:31 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:45.923 01:49:31 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:45.923 01:49:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:45.923 01:49:31 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:45.923 01:49:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:45.923 01:49:31 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:45.923 01:49:31 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:45.923 01:49:31 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:45.923 01:49:31 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:45.923 01:49:31 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:45.923 01:49:31 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:45.923 01:49:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:45.923 01:49:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:45.923 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:45.923 01:49:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:45.923 01:49:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:45.923 01:49:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.923 01:49:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.923 01:49:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:45.923 01:49:31 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:45.923 01:49:31 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:45.923 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:45.923 01:49:31 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:45.923 01:49:31 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:45.923 01:49:31 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:45.923 01:49:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:45.923 01:49:31 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:45.923 01:49:31 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:45.923 01:49:31 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:45.923 01:49:31 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:45.923 01:49:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:45.923 01:49:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.923 01:49:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:45.923 01:49:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.923 01:49:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:45.923 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:45.923 01:49:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:45.923 01:49:31 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:45.923 01:49:31 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:45.923 01:49:31 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:45.923 01:49:31 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:45.923 01:49:31 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:45.923 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:45.923 01:49:31 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:45.923 01:49:31 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:45.923 01:49:31 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:45.923 01:49:31 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:45.923 01:49:31 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:45.923 01:49:31 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:45.923 01:49:31 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:45.923 01:49:31 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:45.923 01:49:31 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:45.923 01:49:31 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:45.923 01:49:31 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:45.923 01:49:31 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:45.923 01:49:31 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:45.923 01:49:31 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:45.923 01:49:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:45.923 01:49:31 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:45.923 01:49:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:45.923 01:49:31 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:45.923 01:49:31 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:45.923 01:49:31 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:45.923 01:49:31 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:45.924 01:49:31 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:45.924 01:49:31 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:45.924 01:49:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:45.924 01:49:31 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:45.924 01:49:31 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:45.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:45.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:14:45.924 00:14:45.924 --- 10.0.0.2 ping statistics --- 00:14:45.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.924 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:14:45.924 01:49:31 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:45.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:45.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:14:45.924 00:14:45.924 --- 10.0.0.1 ping statistics --- 00:14:45.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:45.924 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:14:45.924 01:49:31 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:45.924 01:49:31 -- nvmf/common.sh@410 -- # return 0 00:14:45.924 01:49:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:45.924 01:49:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:45.924 01:49:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:45.924 01:49:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:45.924 01:49:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:45.924 01:49:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:45.924 01:49:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:45.924 01:49:31 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:45.924 01:49:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:45.924 01:49:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:45.924 01:49:31 -- common/autotest_common.sh@10 -- # set +x 00:14:45.924 01:49:31 -- nvmf/common.sh@469 -- # nvmfpid=2124560 00:14:45.924 01:49:31 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:45.924 01:49:31 -- nvmf/common.sh@470 -- # waitforlisten 2124560 00:14:45.924 01:49:31 -- common/autotest_common.sh@819 -- # '[' -z 2124560 ']' 00:14:45.924 01:49:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.924 01:49:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:45.924 01:49:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.924 01:49:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:45.924 01:49:31 -- common/autotest_common.sh@10 -- # set +x 00:14:45.924 [2024-04-15 01:49:31.528773] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:14:45.924 [2024-04-15 01:49:31.528838] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:45.924 EAL: No free 2048 kB hugepages reported on node 1 00:14:46.182 [2024-04-15 01:49:31.595283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:46.182 [2024-04-15 01:49:31.683516] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:46.182 [2024-04-15 01:49:31.683670] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:46.182 [2024-04-15 01:49:31.683689] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:46.182 [2024-04-15 01:49:31.683703] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:46.182 [2024-04-15 01:49:31.683733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.115 01:49:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:47.115 01:49:32 -- common/autotest_common.sh@852 -- # return 0 00:14:47.115 01:49:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:47.115 01:49:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:47.115 01:49:32 -- common/autotest_common.sh@10 -- # set +x 00:14:47.115 01:49:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.115 01:49:32 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:47.115 01:49:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.115 01:49:32 -- common/autotest_common.sh@10 -- # set +x 00:14:47.115 [2024-04-15 01:49:32.549569] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.115 01:49:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.115 01:49:32 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:47.115 01:49:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.115 01:49:32 -- common/autotest_common.sh@10 -- # set +x 00:14:47.115 01:49:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.115 01:49:32 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:47.115 01:49:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.115 01:49:32 -- common/autotest_common.sh@10 -- # set +x 00:14:47.115 [2024-04-15 01:49:32.565751] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.115 01:49:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.115 01:49:32 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:47.115 01:49:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.115 01:49:32 -- common/autotest_common.sh@10 -- # set +x 00:14:47.116 NULL1 00:14:47.116 01:49:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.116 01:49:32 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:47.116 01:49:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.116 01:49:32 -- common/autotest_common.sh@10 -- # set +x 00:14:47.116 01:49:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.116 01:49:32 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:47.116 01:49:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:47.116 01:49:32 -- common/autotest_common.sh@10 -- # set +x 00:14:47.116 01:49:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:47.116 01:49:32 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:47.116 [2024-04-15 01:49:32.610994] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:14:47.116 [2024-04-15 01:49:32.611037] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2124715 ] 00:14:47.116 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.048 Attached to nqn.2016-06.io.spdk:cnode1 00:14:48.048 Namespace ID: 1 size: 1GB 00:14:48.048 fused_ordering(0) 00:14:48.048 fused_ordering(1) 00:14:48.048 fused_ordering(2) 00:14:48.048 fused_ordering(3) 00:14:48.048 fused_ordering(4) 00:14:48.048 fused_ordering(5) 00:14:48.048 fused_ordering(6) 00:14:48.048 fused_ordering(7) 00:14:48.048 fused_ordering(8) 00:14:48.048 fused_ordering(9) 00:14:48.048 fused_ordering(10) 00:14:48.048 fused_ordering(11) 00:14:48.048 fused_ordering(12) 00:14:48.048 fused_ordering(13) 00:14:48.048 fused_ordering(14) 00:14:48.048 fused_ordering(15) 00:14:48.048 fused_ordering(16) 00:14:48.048 fused_ordering(17) 00:14:48.048 fused_ordering(18) 00:14:48.048 fused_ordering(19) 00:14:48.048 fused_ordering(20) 00:14:48.048 fused_ordering(21) 00:14:48.048 fused_ordering(22) 00:14:48.049 fused_ordering(23) 00:14:48.049 fused_ordering(24) 00:14:48.049 fused_ordering(25) 00:14:48.049 fused_ordering(26) 00:14:48.049 fused_ordering(27) 00:14:48.049 fused_ordering(28) 00:14:48.049 fused_ordering(29) 00:14:48.049 fused_ordering(30) 00:14:48.049 fused_ordering(31) 00:14:48.049 fused_ordering(32) 00:14:48.049 fused_ordering(33) 00:14:48.049 fused_ordering(34) 00:14:48.049 fused_ordering(35) 00:14:48.049 fused_ordering(36) 00:14:48.049 fused_ordering(37) 00:14:48.049 fused_ordering(38) 00:14:48.049 fused_ordering(39) 00:14:48.049 fused_ordering(40) 00:14:48.049 fused_ordering(41) 00:14:48.049 fused_ordering(42) 00:14:48.049 fused_ordering(43) 00:14:48.049 fused_ordering(44) 00:14:48.049 fused_ordering(45) 00:14:48.049 fused_ordering(46) 00:14:48.049 fused_ordering(47) 00:14:48.049 fused_ordering(48) 00:14:48.049 fused_ordering(49) 00:14:48.049 fused_ordering(50) 00:14:48.049 fused_ordering(51) 00:14:48.049 fused_ordering(52) 00:14:48.049 fused_ordering(53) 00:14:48.049 fused_ordering(54) 00:14:48.049 fused_ordering(55) 00:14:48.049 fused_ordering(56) 00:14:48.049 fused_ordering(57) 00:14:48.049 fused_ordering(58) 00:14:48.049 fused_ordering(59) 00:14:48.049 fused_ordering(60) 00:14:48.049 fused_ordering(61) 00:14:48.049 fused_ordering(62) 00:14:48.049 fused_ordering(63) 00:14:48.049 fused_ordering(64) 00:14:48.049 fused_ordering(65) 00:14:48.049 fused_ordering(66) 00:14:48.049 fused_ordering(67) 00:14:48.049 fused_ordering(68) 00:14:48.049 fused_ordering(69) 00:14:48.049 fused_ordering(70) 00:14:48.049 fused_ordering(71) 00:14:48.049 fused_ordering(72) 00:14:48.049 fused_ordering(73) 00:14:48.049 fused_ordering(74) 00:14:48.049 fused_ordering(75) 00:14:48.049 fused_ordering(76) 00:14:48.049 fused_ordering(77) 00:14:48.049 fused_ordering(78) 00:14:48.049 fused_ordering(79) 00:14:48.049 fused_ordering(80) 00:14:48.049 fused_ordering(81) 00:14:48.049 fused_ordering(82) 00:14:48.049 fused_ordering(83) 00:14:48.049 fused_ordering(84) 00:14:48.049 fused_ordering(85) 00:14:48.049 fused_ordering(86) 00:14:48.049 fused_ordering(87) 00:14:48.049 fused_ordering(88) 00:14:48.049 fused_ordering(89) 00:14:48.049 fused_ordering(90) 00:14:48.049 fused_ordering(91) 00:14:48.049 fused_ordering(92) 00:14:48.049 fused_ordering(93) 00:14:48.049 fused_ordering(94) 00:14:48.049 fused_ordering(95) 00:14:48.049 fused_ordering(96) 00:14:48.049 
fused_ordering(97) 00:14:48.049 fused_ordering(98) 00:14:48.049 fused_ordering(99) 00:14:48.049 fused_ordering(100) 00:14:48.049 fused_ordering(101) 00:14:48.049 fused_ordering(102) 00:14:48.049 fused_ordering(103) 00:14:48.049 fused_ordering(104) 00:14:48.049 fused_ordering(105) 00:14:48.049 fused_ordering(106) 00:14:48.049 fused_ordering(107) 00:14:48.049 fused_ordering(108) 00:14:48.049 fused_ordering(109) 00:14:48.049 fused_ordering(110) 00:14:48.049 fused_ordering(111) 00:14:48.049 fused_ordering(112) 00:14:48.049 fused_ordering(113) 00:14:48.049 fused_ordering(114) 00:14:48.049 fused_ordering(115) 00:14:48.049 fused_ordering(116) 00:14:48.049 fused_ordering(117) 00:14:48.049 fused_ordering(118) 00:14:48.049 fused_ordering(119) 00:14:48.049 fused_ordering(120) 00:14:48.049 fused_ordering(121) 00:14:48.049 fused_ordering(122) 00:14:48.049 fused_ordering(123) 00:14:48.049 fused_ordering(124) 00:14:48.049 fused_ordering(125) 00:14:48.049 fused_ordering(126) 00:14:48.049 fused_ordering(127) 00:14:48.049 fused_ordering(128) 00:14:48.049 fused_ordering(129) 00:14:48.049 fused_ordering(130) 00:14:48.049 fused_ordering(131) 00:14:48.049 fused_ordering(132) 00:14:48.049 fused_ordering(133) 00:14:48.049 fused_ordering(134) 00:14:48.049 fused_ordering(135) 00:14:48.049 fused_ordering(136) 00:14:48.049 fused_ordering(137) 00:14:48.049 fused_ordering(138) 00:14:48.049 fused_ordering(139) 00:14:48.049 fused_ordering(140) 00:14:48.049 fused_ordering(141) 00:14:48.049 fused_ordering(142) 00:14:48.049 fused_ordering(143) 00:14:48.049 fused_ordering(144) 00:14:48.049 fused_ordering(145) 00:14:48.049 fused_ordering(146) 00:14:48.049 fused_ordering(147) 00:14:48.049 fused_ordering(148) 00:14:48.049 fused_ordering(149) 00:14:48.049 fused_ordering(150) 00:14:48.049 fused_ordering(151) 00:14:48.049 fused_ordering(152) 00:14:48.049 fused_ordering(153) 00:14:48.049 fused_ordering(154) 00:14:48.049 fused_ordering(155) 00:14:48.049 fused_ordering(156) 00:14:48.049 fused_ordering(157) 00:14:48.049 fused_ordering(158) 00:14:48.049 fused_ordering(159) 00:14:48.049 fused_ordering(160) 00:14:48.049 fused_ordering(161) 00:14:48.049 fused_ordering(162) 00:14:48.049 fused_ordering(163) 00:14:48.049 fused_ordering(164) 00:14:48.049 fused_ordering(165) 00:14:48.049 fused_ordering(166) 00:14:48.049 fused_ordering(167) 00:14:48.049 fused_ordering(168) 00:14:48.049 fused_ordering(169) 00:14:48.049 fused_ordering(170) 00:14:48.049 fused_ordering(171) 00:14:48.049 fused_ordering(172) 00:14:48.049 fused_ordering(173) 00:14:48.049 fused_ordering(174) 00:14:48.049 fused_ordering(175) 00:14:48.049 fused_ordering(176) 00:14:48.049 fused_ordering(177) 00:14:48.049 fused_ordering(178) 00:14:48.049 fused_ordering(179) 00:14:48.049 fused_ordering(180) 00:14:48.049 fused_ordering(181) 00:14:48.049 fused_ordering(182) 00:14:48.049 fused_ordering(183) 00:14:48.049 fused_ordering(184) 00:14:48.049 fused_ordering(185) 00:14:48.049 fused_ordering(186) 00:14:48.049 fused_ordering(187) 00:14:48.049 fused_ordering(188) 00:14:48.049 fused_ordering(189) 00:14:48.049 fused_ordering(190) 00:14:48.049 fused_ordering(191) 00:14:48.049 fused_ordering(192) 00:14:48.049 fused_ordering(193) 00:14:48.049 fused_ordering(194) 00:14:48.049 fused_ordering(195) 00:14:48.049 fused_ordering(196) 00:14:48.049 fused_ordering(197) 00:14:48.049 fused_ordering(198) 00:14:48.049 fused_ordering(199) 00:14:48.049 fused_ordering(200) 00:14:48.049 fused_ordering(201) 00:14:48.049 fused_ordering(202) 00:14:48.049 fused_ordering(203) 00:14:48.049 fused_ordering(204) 
00:14:48.049 fused_ordering(205) 00:14:48.983 fused_ordering(206) 00:14:48.983 fused_ordering(207) 00:14:48.983 fused_ordering(208) 00:14:48.983 fused_ordering(209) 00:14:48.983 fused_ordering(210) 00:14:48.983 fused_ordering(211) 00:14:48.983 fused_ordering(212) 00:14:48.983 fused_ordering(213) 00:14:48.983 fused_ordering(214) 00:14:48.983 fused_ordering(215) 00:14:48.983 fused_ordering(216) 00:14:48.983 fused_ordering(217) 00:14:48.983 fused_ordering(218) 00:14:48.983 fused_ordering(219) 00:14:48.983 fused_ordering(220) 00:14:48.983 fused_ordering(221) 00:14:48.983 fused_ordering(222) 00:14:48.983 fused_ordering(223) 00:14:48.983 fused_ordering(224) 00:14:48.983 fused_ordering(225) 00:14:48.983 fused_ordering(226) 00:14:48.983 fused_ordering(227) 00:14:48.983 fused_ordering(228) 00:14:48.983 fused_ordering(229) 00:14:48.983 fused_ordering(230) 00:14:48.983 fused_ordering(231) 00:14:48.983 fused_ordering(232) 00:14:48.983 fused_ordering(233) 00:14:48.983 fused_ordering(234) 00:14:48.983 fused_ordering(235) 00:14:48.983 fused_ordering(236) 00:14:48.983 fused_ordering(237) 00:14:48.983 fused_ordering(238) 00:14:48.983 fused_ordering(239) 00:14:48.983 fused_ordering(240) 00:14:48.983 fused_ordering(241) 00:14:48.983 fused_ordering(242) 00:14:48.983 fused_ordering(243) 00:14:48.983 fused_ordering(244) 00:14:48.983 fused_ordering(245) 00:14:48.983 fused_ordering(246) 00:14:48.983 fused_ordering(247) 00:14:48.983 fused_ordering(248) 00:14:48.983 fused_ordering(249) 00:14:48.983 fused_ordering(250) 00:14:48.983 fused_ordering(251) 00:14:48.983 fused_ordering(252) 00:14:48.983 fused_ordering(253) 00:14:48.983 fused_ordering(254) 00:14:48.983 fused_ordering(255) 00:14:48.983 fused_ordering(256) 00:14:48.983 fused_ordering(257) 00:14:48.983 fused_ordering(258) 00:14:48.983 fused_ordering(259) 00:14:48.983 fused_ordering(260) 00:14:48.983 fused_ordering(261) 00:14:48.983 fused_ordering(262) 00:14:48.983 fused_ordering(263) 00:14:48.983 fused_ordering(264) 00:14:48.983 fused_ordering(265) 00:14:48.983 fused_ordering(266) 00:14:48.983 fused_ordering(267) 00:14:48.983 fused_ordering(268) 00:14:48.983 fused_ordering(269) 00:14:48.983 fused_ordering(270) 00:14:48.983 fused_ordering(271) 00:14:48.983 fused_ordering(272) 00:14:48.983 fused_ordering(273) 00:14:48.983 fused_ordering(274) 00:14:48.983 fused_ordering(275) 00:14:48.983 fused_ordering(276) 00:14:48.983 fused_ordering(277) 00:14:48.983 fused_ordering(278) 00:14:48.983 fused_ordering(279) 00:14:48.983 fused_ordering(280) 00:14:48.983 fused_ordering(281) 00:14:48.983 fused_ordering(282) 00:14:48.983 fused_ordering(283) 00:14:48.983 fused_ordering(284) 00:14:48.983 fused_ordering(285) 00:14:48.983 fused_ordering(286) 00:14:48.983 fused_ordering(287) 00:14:48.983 fused_ordering(288) 00:14:48.983 fused_ordering(289) 00:14:48.983 fused_ordering(290) 00:14:48.983 fused_ordering(291) 00:14:48.983 fused_ordering(292) 00:14:48.983 fused_ordering(293) 00:14:48.983 fused_ordering(294) 00:14:48.983 fused_ordering(295) 00:14:48.983 fused_ordering(296) 00:14:48.983 fused_ordering(297) 00:14:48.983 fused_ordering(298) 00:14:48.983 fused_ordering(299) 00:14:48.983 fused_ordering(300) 00:14:48.984 fused_ordering(301) 00:14:48.984 fused_ordering(302) 00:14:48.984 fused_ordering(303) 00:14:48.984 fused_ordering(304) 00:14:48.984 fused_ordering(305) 00:14:48.984 fused_ordering(306) 00:14:48.984 fused_ordering(307) 00:14:48.984 fused_ordering(308) 00:14:48.984 fused_ordering(309) 00:14:48.984 fused_ordering(310) 00:14:48.984 fused_ordering(311) 00:14:48.984 
fused_ordering(312) 00:14:48.984 fused_ordering(313) 00:14:48.984 fused_ordering(314) 00:14:48.984 fused_ordering(315) 00:14:48.984 fused_ordering(316) 00:14:48.984 fused_ordering(317) 00:14:48.984 fused_ordering(318) 00:14:48.984 fused_ordering(319) 00:14:48.984 fused_ordering(320) 00:14:48.984 fused_ordering(321) 00:14:48.984 fused_ordering(322) 00:14:48.984 fused_ordering(323) 00:14:48.984 fused_ordering(324) 00:14:48.984 fused_ordering(325) 00:14:48.984 fused_ordering(326) 00:14:48.984 fused_ordering(327) 00:14:48.984 fused_ordering(328) 00:14:48.984 fused_ordering(329) 00:14:48.984 fused_ordering(330) 00:14:48.984 fused_ordering(331) 00:14:48.984 fused_ordering(332) 00:14:48.984 fused_ordering(333) 00:14:48.984 fused_ordering(334) 00:14:48.984 fused_ordering(335) 00:14:48.984 fused_ordering(336) 00:14:48.984 fused_ordering(337) 00:14:48.984 fused_ordering(338) 00:14:48.984 fused_ordering(339) 00:14:48.984 fused_ordering(340) 00:14:48.984 fused_ordering(341) 00:14:48.984 fused_ordering(342) 00:14:48.984 fused_ordering(343) 00:14:48.984 fused_ordering(344) 00:14:48.984 fused_ordering(345) 00:14:48.984 fused_ordering(346) 00:14:48.984 fused_ordering(347) 00:14:48.984 fused_ordering(348) 00:14:48.984 fused_ordering(349) 00:14:48.984 fused_ordering(350) 00:14:48.984 fused_ordering(351) 00:14:48.984 fused_ordering(352) 00:14:48.984 fused_ordering(353) 00:14:48.984 fused_ordering(354) 00:14:48.984 fused_ordering(355) 00:14:48.984 fused_ordering(356) 00:14:48.984 fused_ordering(357) 00:14:48.984 fused_ordering(358) 00:14:48.984 fused_ordering(359) 00:14:48.984 fused_ordering(360) 00:14:48.984 fused_ordering(361) 00:14:48.984 fused_ordering(362) 00:14:48.984 fused_ordering(363) 00:14:48.984 fused_ordering(364) 00:14:48.984 fused_ordering(365) 00:14:48.984 fused_ordering(366) 00:14:48.984 fused_ordering(367) 00:14:48.984 fused_ordering(368) 00:14:48.984 fused_ordering(369) 00:14:48.984 fused_ordering(370) 00:14:48.984 fused_ordering(371) 00:14:48.984 fused_ordering(372) 00:14:48.984 fused_ordering(373) 00:14:48.984 fused_ordering(374) 00:14:48.984 fused_ordering(375) 00:14:48.984 fused_ordering(376) 00:14:48.984 fused_ordering(377) 00:14:48.984 fused_ordering(378) 00:14:48.984 fused_ordering(379) 00:14:48.984 fused_ordering(380) 00:14:48.984 fused_ordering(381) 00:14:48.984 fused_ordering(382) 00:14:48.984 fused_ordering(383) 00:14:48.984 fused_ordering(384) 00:14:48.984 fused_ordering(385) 00:14:48.984 fused_ordering(386) 00:14:48.984 fused_ordering(387) 00:14:48.984 fused_ordering(388) 00:14:48.984 fused_ordering(389) 00:14:48.984 fused_ordering(390) 00:14:48.984 fused_ordering(391) 00:14:48.984 fused_ordering(392) 00:14:48.984 fused_ordering(393) 00:14:48.984 fused_ordering(394) 00:14:48.984 fused_ordering(395) 00:14:48.984 fused_ordering(396) 00:14:48.984 fused_ordering(397) 00:14:48.984 fused_ordering(398) 00:14:48.984 fused_ordering(399) 00:14:48.984 fused_ordering(400) 00:14:48.984 fused_ordering(401) 00:14:48.984 fused_ordering(402) 00:14:48.984 fused_ordering(403) 00:14:48.984 fused_ordering(404) 00:14:48.984 fused_ordering(405) 00:14:48.984 fused_ordering(406) 00:14:48.984 fused_ordering(407) 00:14:48.984 fused_ordering(408) 00:14:48.984 fused_ordering(409) 00:14:48.984 fused_ordering(410) 00:14:49.919 fused_ordering(411) 00:14:49.919 fused_ordering(412) 00:14:49.919 fused_ordering(413) 00:14:49.919 fused_ordering(414) 00:14:49.919 fused_ordering(415) 00:14:49.919 fused_ordering(416) 00:14:49.919 fused_ordering(417) 00:14:49.919 fused_ordering(418) 00:14:49.919 fused_ordering(419) 
00:14:49.919 fused_ordering(420) 00:14:49.919 fused_ordering(421) 00:14:49.919 fused_ordering(422) 00:14:49.919 fused_ordering(423) 00:14:49.919 fused_ordering(424) 00:14:49.919 fused_ordering(425) 00:14:49.919 fused_ordering(426) 00:14:49.919 fused_ordering(427) 00:14:49.919 fused_ordering(428) 00:14:49.919 fused_ordering(429) 00:14:49.919 fused_ordering(430) 00:14:49.919 fused_ordering(431) 00:14:49.919 fused_ordering(432) 00:14:49.919 fused_ordering(433) 00:14:49.919 fused_ordering(434) 00:14:49.919 fused_ordering(435) 00:14:49.919 fused_ordering(436) 00:14:49.919 fused_ordering(437) 00:14:49.919 fused_ordering(438) 00:14:49.919 fused_ordering(439) 00:14:49.919 fused_ordering(440) 00:14:49.919 fused_ordering(441) 00:14:49.919 fused_ordering(442) 00:14:49.919 fused_ordering(443) 00:14:49.919 fused_ordering(444) 00:14:49.919 fused_ordering(445) 00:14:49.919 fused_ordering(446) 00:14:49.919 fused_ordering(447) 00:14:49.919 fused_ordering(448) 00:14:49.919 fused_ordering(449) 00:14:49.919 fused_ordering(450) 00:14:49.919 fused_ordering(451) 00:14:49.919 fused_ordering(452) 00:14:49.919 fused_ordering(453) 00:14:49.919 fused_ordering(454) 00:14:49.919 fused_ordering(455) 00:14:49.919 fused_ordering(456) 00:14:49.919 fused_ordering(457) 00:14:49.919 fused_ordering(458) 00:14:49.919 fused_ordering(459) 00:14:49.919 fused_ordering(460) 00:14:49.919 fused_ordering(461) 00:14:49.919 fused_ordering(462) 00:14:49.919 fused_ordering(463) 00:14:49.919 fused_ordering(464) 00:14:49.919 fused_ordering(465) 00:14:49.919 fused_ordering(466) 00:14:49.919 fused_ordering(467) 00:14:49.919 fused_ordering(468) 00:14:49.919 fused_ordering(469) 00:14:49.919 fused_ordering(470) 00:14:49.919 fused_ordering(471) 00:14:49.919 fused_ordering(472) 00:14:49.919 fused_ordering(473) 00:14:49.919 fused_ordering(474) 00:14:49.919 fused_ordering(475) 00:14:49.919 fused_ordering(476) 00:14:49.919 fused_ordering(477) 00:14:49.919 fused_ordering(478) 00:14:49.919 fused_ordering(479) 00:14:49.919 fused_ordering(480) 00:14:49.919 fused_ordering(481) 00:14:49.919 fused_ordering(482) 00:14:49.919 fused_ordering(483) 00:14:49.919 fused_ordering(484) 00:14:49.919 fused_ordering(485) 00:14:49.919 fused_ordering(486) 00:14:49.919 fused_ordering(487) 00:14:49.919 fused_ordering(488) 00:14:49.919 fused_ordering(489) 00:14:49.919 fused_ordering(490) 00:14:49.919 fused_ordering(491) 00:14:49.919 fused_ordering(492) 00:14:49.919 fused_ordering(493) 00:14:49.919 fused_ordering(494) 00:14:49.919 fused_ordering(495) 00:14:49.919 fused_ordering(496) 00:14:49.919 fused_ordering(497) 00:14:49.919 fused_ordering(498) 00:14:49.919 fused_ordering(499) 00:14:49.919 fused_ordering(500) 00:14:49.919 fused_ordering(501) 00:14:49.919 fused_ordering(502) 00:14:49.919 fused_ordering(503) 00:14:49.919 fused_ordering(504) 00:14:49.919 fused_ordering(505) 00:14:49.919 fused_ordering(506) 00:14:49.919 fused_ordering(507) 00:14:49.919 fused_ordering(508) 00:14:49.919 fused_ordering(509) 00:14:49.919 fused_ordering(510) 00:14:49.919 fused_ordering(511) 00:14:49.919 fused_ordering(512) 00:14:49.919 fused_ordering(513) 00:14:49.919 fused_ordering(514) 00:14:49.919 fused_ordering(515) 00:14:49.919 fused_ordering(516) 00:14:49.919 fused_ordering(517) 00:14:49.919 fused_ordering(518) 00:14:49.919 fused_ordering(519) 00:14:49.919 fused_ordering(520) 00:14:49.919 fused_ordering(521) 00:14:49.919 fused_ordering(522) 00:14:49.919 fused_ordering(523) 00:14:49.919 fused_ordering(524) 00:14:49.919 fused_ordering(525) 00:14:49.919 fused_ordering(526) 00:14:49.919 
fused_ordering(527) 00:14:49.919 fused_ordering(528) 00:14:49.919 fused_ordering(529) 00:14:49.919 fused_ordering(530) 00:14:49.919 fused_ordering(531) 00:14:49.919 fused_ordering(532) 00:14:49.919 fused_ordering(533) 00:14:49.919 fused_ordering(534) 00:14:49.919 fused_ordering(535) 00:14:49.919 fused_ordering(536) 00:14:49.919 fused_ordering(537) 00:14:49.919 fused_ordering(538) 00:14:49.919 fused_ordering(539) 00:14:49.919 fused_ordering(540) 00:14:49.919 fused_ordering(541) 00:14:49.919 fused_ordering(542) 00:14:49.919 fused_ordering(543) 00:14:49.919 fused_ordering(544) 00:14:49.919 fused_ordering(545) 00:14:49.919 fused_ordering(546) 00:14:49.919 fused_ordering(547) 00:14:49.919 fused_ordering(548) 00:14:49.919 fused_ordering(549) 00:14:49.919 fused_ordering(550) 00:14:49.919 fused_ordering(551) 00:14:49.919 fused_ordering(552) 00:14:49.919 fused_ordering(553) 00:14:49.919 fused_ordering(554) 00:14:49.919 fused_ordering(555) 00:14:49.919 fused_ordering(556) 00:14:49.919 fused_ordering(557) 00:14:49.919 fused_ordering(558) 00:14:49.919 fused_ordering(559) 00:14:49.919 fused_ordering(560) 00:14:49.919 fused_ordering(561) 00:14:49.919 fused_ordering(562) 00:14:49.919 fused_ordering(563) 00:14:49.919 fused_ordering(564) 00:14:49.919 fused_ordering(565) 00:14:49.919 fused_ordering(566) 00:14:49.919 fused_ordering(567) 00:14:49.919 fused_ordering(568) 00:14:49.919 fused_ordering(569) 00:14:49.919 fused_ordering(570) 00:14:49.919 fused_ordering(571) 00:14:49.919 fused_ordering(572) 00:14:49.919 fused_ordering(573) 00:14:49.919 fused_ordering(574) 00:14:49.919 fused_ordering(575) 00:14:49.919 fused_ordering(576) 00:14:49.919 fused_ordering(577) 00:14:49.919 fused_ordering(578) 00:14:49.919 fused_ordering(579) 00:14:49.919 fused_ordering(580) 00:14:49.919 fused_ordering(581) 00:14:49.919 fused_ordering(582) 00:14:49.919 fused_ordering(583) 00:14:49.919 fused_ordering(584) 00:14:49.919 fused_ordering(585) 00:14:49.919 fused_ordering(586) 00:14:49.919 fused_ordering(587) 00:14:49.919 fused_ordering(588) 00:14:49.919 fused_ordering(589) 00:14:49.919 fused_ordering(590) 00:14:49.919 fused_ordering(591) 00:14:49.919 fused_ordering(592) 00:14:49.919 fused_ordering(593) 00:14:49.919 fused_ordering(594) 00:14:49.919 fused_ordering(595) 00:14:49.919 fused_ordering(596) 00:14:49.919 fused_ordering(597) 00:14:49.919 fused_ordering(598) 00:14:49.919 fused_ordering(599) 00:14:49.919 fused_ordering(600) 00:14:49.919 fused_ordering(601) 00:14:49.919 fused_ordering(602) 00:14:49.919 fused_ordering(603) 00:14:49.919 fused_ordering(604) 00:14:49.919 fused_ordering(605) 00:14:49.919 fused_ordering(606) 00:14:49.919 fused_ordering(607) 00:14:49.919 fused_ordering(608) 00:14:49.919 fused_ordering(609) 00:14:49.919 fused_ordering(610) 00:14:49.919 fused_ordering(611) 00:14:49.919 fused_ordering(612) 00:14:49.919 fused_ordering(613) 00:14:49.919 fused_ordering(614) 00:14:49.919 fused_ordering(615) 00:14:51.327 fused_ordering(616) 00:14:51.327 fused_ordering(617) 00:14:51.327 fused_ordering(618) 00:14:51.327 fused_ordering(619) 00:14:51.327 fused_ordering(620) 00:14:51.327 fused_ordering(621) 00:14:51.327 fused_ordering(622) 00:14:51.327 fused_ordering(623) 00:14:51.327 fused_ordering(624) 00:14:51.327 fused_ordering(625) 00:14:51.327 fused_ordering(626) 00:14:51.327 fused_ordering(627) 00:14:51.327 fused_ordering(628) 00:14:51.327 fused_ordering(629) 00:14:51.327 fused_ordering(630) 00:14:51.327 fused_ordering(631) 00:14:51.327 fused_ordering(632) 00:14:51.327 fused_ordering(633) 00:14:51.327 fused_ordering(634) 
00:14:51.327 fused_ordering(635) 00:14:51.327 fused_ordering(636) 00:14:51.327 fused_ordering(637) 00:14:51.327 fused_ordering(638) 00:14:51.327 fused_ordering(639) 00:14:51.327 fused_ordering(640) 00:14:51.327 fused_ordering(641) 00:14:51.327 fused_ordering(642) 00:14:51.327 fused_ordering(643) 00:14:51.327 fused_ordering(644) 00:14:51.327 fused_ordering(645) 00:14:51.327 fused_ordering(646) 00:14:51.327 fused_ordering(647) 00:14:51.327 fused_ordering(648) 00:14:51.327 fused_ordering(649) 00:14:51.327 fused_ordering(650) 00:14:51.327 fused_ordering(651) 00:14:51.327 fused_ordering(652) 00:14:51.327 fused_ordering(653) 00:14:51.327 fused_ordering(654) 00:14:51.327 fused_ordering(655) 00:14:51.327 fused_ordering(656) 00:14:51.327 fused_ordering(657) 00:14:51.327 fused_ordering(658) 00:14:51.327 fused_ordering(659) 00:14:51.327 fused_ordering(660) 00:14:51.327 fused_ordering(661) 00:14:51.327 fused_ordering(662) 00:14:51.327 fused_ordering(663) 00:14:51.327 fused_ordering(664) 00:14:51.327 fused_ordering(665) 00:14:51.327 fused_ordering(666) 00:14:51.327 fused_ordering(667) 00:14:51.327 fused_ordering(668) 00:14:51.327 fused_ordering(669) 00:14:51.327 fused_ordering(670) 00:14:51.327 fused_ordering(671) 00:14:51.327 fused_ordering(672) 00:14:51.327 fused_ordering(673) 00:14:51.327 fused_ordering(674) 00:14:51.327 fused_ordering(675) 00:14:51.327 fused_ordering(676) 00:14:51.327 fused_ordering(677) 00:14:51.327 fused_ordering(678) 00:14:51.327 fused_ordering(679) 00:14:51.327 fused_ordering(680) 00:14:51.327 fused_ordering(681) 00:14:51.327 fused_ordering(682) 00:14:51.327 fused_ordering(683) 00:14:51.327 fused_ordering(684) 00:14:51.327 fused_ordering(685) 00:14:51.327 fused_ordering(686) 00:14:51.327 fused_ordering(687) 00:14:51.327 fused_ordering(688) 00:14:51.327 fused_ordering(689) 00:14:51.327 fused_ordering(690) 00:14:51.327 fused_ordering(691) 00:14:51.327 fused_ordering(692) 00:14:51.327 fused_ordering(693) 00:14:51.327 fused_ordering(694) 00:14:51.327 fused_ordering(695) 00:14:51.327 fused_ordering(696) 00:14:51.327 fused_ordering(697) 00:14:51.327 fused_ordering(698) 00:14:51.327 fused_ordering(699) 00:14:51.327 fused_ordering(700) 00:14:51.327 fused_ordering(701) 00:14:51.327 fused_ordering(702) 00:14:51.327 fused_ordering(703) 00:14:51.327 fused_ordering(704) 00:14:51.327 fused_ordering(705) 00:14:51.327 fused_ordering(706) 00:14:51.327 fused_ordering(707) 00:14:51.327 fused_ordering(708) 00:14:51.327 fused_ordering(709) 00:14:51.327 fused_ordering(710) 00:14:51.327 fused_ordering(711) 00:14:51.327 fused_ordering(712) 00:14:51.327 fused_ordering(713) 00:14:51.327 fused_ordering(714) 00:14:51.327 fused_ordering(715) 00:14:51.327 fused_ordering(716) 00:14:51.327 fused_ordering(717) 00:14:51.327 fused_ordering(718) 00:14:51.327 fused_ordering(719) 00:14:51.327 fused_ordering(720) 00:14:51.327 fused_ordering(721) 00:14:51.327 fused_ordering(722) 00:14:51.327 fused_ordering(723) 00:14:51.327 fused_ordering(724) 00:14:51.327 fused_ordering(725) 00:14:51.327 fused_ordering(726) 00:14:51.327 fused_ordering(727) 00:14:51.327 fused_ordering(728) 00:14:51.327 fused_ordering(729) 00:14:51.327 fused_ordering(730) 00:14:51.327 fused_ordering(731) 00:14:51.327 fused_ordering(732) 00:14:51.327 fused_ordering(733) 00:14:51.327 fused_ordering(734) 00:14:51.327 fused_ordering(735) 00:14:51.327 fused_ordering(736) 00:14:51.327 fused_ordering(737) 00:14:51.327 fused_ordering(738) 00:14:51.327 fused_ordering(739) 00:14:51.327 fused_ordering(740) 00:14:51.327 fused_ordering(741) 00:14:51.327 
fused_ordering(742) 00:14:51.327 fused_ordering(743) 00:14:51.327 fused_ordering(744) 00:14:51.327 fused_ordering(745) 00:14:51.327 fused_ordering(746) 00:14:51.327 fused_ordering(747) 00:14:51.327 fused_ordering(748) 00:14:51.327 fused_ordering(749) 00:14:51.327 fused_ordering(750) 00:14:51.327 fused_ordering(751) 00:14:51.327 fused_ordering(752) 00:14:51.327 fused_ordering(753) 00:14:51.327 fused_ordering(754) 00:14:51.327 fused_ordering(755) 00:14:51.327 fused_ordering(756) 00:14:51.327 fused_ordering(757) 00:14:51.327 fused_ordering(758) 00:14:51.327 fused_ordering(759) 00:14:51.327 fused_ordering(760) 00:14:51.327 fused_ordering(761) 00:14:51.327 fused_ordering(762) 00:14:51.327 fused_ordering(763) 00:14:51.327 fused_ordering(764) 00:14:51.327 fused_ordering(765) 00:14:51.327 fused_ordering(766) 00:14:51.327 fused_ordering(767) 00:14:51.327 fused_ordering(768) 00:14:51.327 fused_ordering(769) 00:14:51.327 fused_ordering(770) 00:14:51.327 fused_ordering(771) 00:14:51.327 fused_ordering(772) 00:14:51.327 fused_ordering(773) 00:14:51.327 fused_ordering(774) 00:14:51.327 fused_ordering(775) 00:14:51.327 fused_ordering(776) 00:14:51.327 fused_ordering(777) 00:14:51.327 fused_ordering(778) 00:14:51.327 fused_ordering(779) 00:14:51.327 fused_ordering(780) 00:14:51.327 fused_ordering(781) 00:14:51.327 fused_ordering(782) 00:14:51.327 fused_ordering(783) 00:14:51.327 fused_ordering(784) 00:14:51.328 fused_ordering(785) 00:14:51.328 fused_ordering(786) 00:14:51.328 fused_ordering(787) 00:14:51.328 fused_ordering(788) 00:14:51.328 fused_ordering(789) 00:14:51.328 fused_ordering(790) 00:14:51.328 fused_ordering(791) 00:14:51.328 fused_ordering(792) 00:14:51.328 fused_ordering(793) 00:14:51.328 fused_ordering(794) 00:14:51.328 fused_ordering(795) 00:14:51.328 fused_ordering(796) 00:14:51.328 fused_ordering(797) 00:14:51.328 fused_ordering(798) 00:14:51.328 fused_ordering(799) 00:14:51.328 fused_ordering(800) 00:14:51.328 fused_ordering(801) 00:14:51.328 fused_ordering(802) 00:14:51.328 fused_ordering(803) 00:14:51.328 fused_ordering(804) 00:14:51.328 fused_ordering(805) 00:14:51.328 fused_ordering(806) 00:14:51.328 fused_ordering(807) 00:14:51.328 fused_ordering(808) 00:14:51.328 fused_ordering(809) 00:14:51.328 fused_ordering(810) 00:14:51.328 fused_ordering(811) 00:14:51.328 fused_ordering(812) 00:14:51.328 fused_ordering(813) 00:14:51.328 fused_ordering(814) 00:14:51.328 fused_ordering(815) 00:14:51.328 fused_ordering(816) 00:14:51.328 fused_ordering(817) 00:14:51.328 fused_ordering(818) 00:14:51.328 fused_ordering(819) 00:14:51.328 fused_ordering(820) 00:14:52.259 fused_ordering(821) 00:14:52.259 fused_ordering(822) 00:14:52.259 fused_ordering(823) 00:14:52.259 fused_ordering(824) 00:14:52.259 fused_ordering(825) 00:14:52.259 fused_ordering(826) 00:14:52.259 fused_ordering(827) 00:14:52.259 fused_ordering(828) 00:14:52.259 fused_ordering(829) 00:14:52.259 fused_ordering(830) 00:14:52.259 fused_ordering(831) 00:14:52.259 fused_ordering(832) 00:14:52.259 fused_ordering(833) 00:14:52.259 fused_ordering(834) 00:14:52.259 fused_ordering(835) 00:14:52.259 fused_ordering(836) 00:14:52.259 fused_ordering(837) 00:14:52.259 fused_ordering(838) 00:14:52.260 fused_ordering(839) 00:14:52.260 fused_ordering(840) 00:14:52.260 fused_ordering(841) 00:14:52.260 fused_ordering(842) 00:14:52.260 fused_ordering(843) 00:14:52.260 fused_ordering(844) 00:14:52.260 fused_ordering(845) 00:14:52.260 fused_ordering(846) 00:14:52.260 fused_ordering(847) 00:14:52.260 fused_ordering(848) 00:14:52.260 fused_ordering(849) 
00:14:52.260 fused_ordering(850) 00:14:52.260 fused_ordering(851) 00:14:52.260 fused_ordering(852) 00:14:52.260 fused_ordering(853) 00:14:52.260 fused_ordering(854) 00:14:52.260 fused_ordering(855) 00:14:52.260 fused_ordering(856) 00:14:52.260 fused_ordering(857) 00:14:52.260 fused_ordering(858) 00:14:52.260 fused_ordering(859) 00:14:52.260 fused_ordering(860) 00:14:52.260 fused_ordering(861) 00:14:52.260 fused_ordering(862) 00:14:52.260 fused_ordering(863) 00:14:52.260 fused_ordering(864) 00:14:52.260 fused_ordering(865) 00:14:52.260 fused_ordering(866) 00:14:52.260 fused_ordering(867) 00:14:52.260 fused_ordering(868) 00:14:52.260 fused_ordering(869) 00:14:52.260 fused_ordering(870) 00:14:52.260 fused_ordering(871) 00:14:52.260 fused_ordering(872) 00:14:52.260 fused_ordering(873) 00:14:52.260 fused_ordering(874) 00:14:52.260 fused_ordering(875) 00:14:52.260 fused_ordering(876) 00:14:52.260 fused_ordering(877) 00:14:52.260 fused_ordering(878) 00:14:52.260 fused_ordering(879) 00:14:52.260 fused_ordering(880) 00:14:52.260 fused_ordering(881) 00:14:52.260 fused_ordering(882) 00:14:52.260 fused_ordering(883) 00:14:52.260 fused_ordering(884) 00:14:52.260 fused_ordering(885) 00:14:52.260 fused_ordering(886) 00:14:52.260 fused_ordering(887) 00:14:52.260 fused_ordering(888) 00:14:52.260 fused_ordering(889) 00:14:52.260 fused_ordering(890) 00:14:52.260 fused_ordering(891) 00:14:52.260 fused_ordering(892) 00:14:52.260 fused_ordering(893) 00:14:52.260 fused_ordering(894) 00:14:52.260 fused_ordering(895) 00:14:52.260 fused_ordering(896) 00:14:52.260 fused_ordering(897) 00:14:52.260 fused_ordering(898) 00:14:52.260 fused_ordering(899) 00:14:52.260 fused_ordering(900) 00:14:52.260 fused_ordering(901) 00:14:52.260 fused_ordering(902) 00:14:52.260 fused_ordering(903) 00:14:52.260 fused_ordering(904) 00:14:52.260 fused_ordering(905) 00:14:52.260 fused_ordering(906) 00:14:52.260 fused_ordering(907) 00:14:52.260 fused_ordering(908) 00:14:52.260 fused_ordering(909) 00:14:52.260 fused_ordering(910) 00:14:52.260 fused_ordering(911) 00:14:52.260 fused_ordering(912) 00:14:52.260 fused_ordering(913) 00:14:52.260 fused_ordering(914) 00:14:52.260 fused_ordering(915) 00:14:52.260 fused_ordering(916) 00:14:52.260 fused_ordering(917) 00:14:52.260 fused_ordering(918) 00:14:52.260 fused_ordering(919) 00:14:52.260 fused_ordering(920) 00:14:52.260 fused_ordering(921) 00:14:52.260 fused_ordering(922) 00:14:52.260 fused_ordering(923) 00:14:52.260 fused_ordering(924) 00:14:52.260 fused_ordering(925) 00:14:52.260 fused_ordering(926) 00:14:52.260 fused_ordering(927) 00:14:52.260 fused_ordering(928) 00:14:52.260 fused_ordering(929) 00:14:52.260 fused_ordering(930) 00:14:52.260 fused_ordering(931) 00:14:52.260 fused_ordering(932) 00:14:52.260 fused_ordering(933) 00:14:52.260 fused_ordering(934) 00:14:52.260 fused_ordering(935) 00:14:52.260 fused_ordering(936) 00:14:52.260 fused_ordering(937) 00:14:52.260 fused_ordering(938) 00:14:52.260 fused_ordering(939) 00:14:52.260 fused_ordering(940) 00:14:52.260 fused_ordering(941) 00:14:52.260 fused_ordering(942) 00:14:52.260 fused_ordering(943) 00:14:52.260 fused_ordering(944) 00:14:52.260 fused_ordering(945) 00:14:52.260 fused_ordering(946) 00:14:52.260 fused_ordering(947) 00:14:52.260 fused_ordering(948) 00:14:52.260 fused_ordering(949) 00:14:52.260 fused_ordering(950) 00:14:52.260 fused_ordering(951) 00:14:52.260 fused_ordering(952) 00:14:52.260 fused_ordering(953) 00:14:52.260 fused_ordering(954) 00:14:52.260 fused_ordering(955) 00:14:52.260 fused_ordering(956) 00:14:52.260 
fused_ordering(957) 00:14:52.260 fused_ordering(958) 00:14:52.260 fused_ordering(959) 00:14:52.260 fused_ordering(960) 00:14:52.260 fused_ordering(961) 00:14:52.260 fused_ordering(962) 00:14:52.260 fused_ordering(963) 00:14:52.260 fused_ordering(964) 00:14:52.260 fused_ordering(965) 00:14:52.260 fused_ordering(966) 00:14:52.260 fused_ordering(967) 00:14:52.260 fused_ordering(968) 00:14:52.260 fused_ordering(969) 00:14:52.260 fused_ordering(970) 00:14:52.260 fused_ordering(971) 00:14:52.260 fused_ordering(972) 00:14:52.260 fused_ordering(973) 00:14:52.260 fused_ordering(974) 00:14:52.260 fused_ordering(975) 00:14:52.260 fused_ordering(976) 00:14:52.260 fused_ordering(977) 00:14:52.260 fused_ordering(978) 00:14:52.260 fused_ordering(979) 00:14:52.260 fused_ordering(980) 00:14:52.260 fused_ordering(981) 00:14:52.260 fused_ordering(982) 00:14:52.260 fused_ordering(983) 00:14:52.260 fused_ordering(984) 00:14:52.260 fused_ordering(985) 00:14:52.260 fused_ordering(986) 00:14:52.260 fused_ordering(987) 00:14:52.260 fused_ordering(988) 00:14:52.260 fused_ordering(989) 00:14:52.260 fused_ordering(990) 00:14:52.260 fused_ordering(991) 00:14:52.260 fused_ordering(992) 00:14:52.260 fused_ordering(993) 00:14:52.260 fused_ordering(994) 00:14:52.260 fused_ordering(995) 00:14:52.260 fused_ordering(996) 00:14:52.260 fused_ordering(997) 00:14:52.260 fused_ordering(998) 00:14:52.260 fused_ordering(999) 00:14:52.260 fused_ordering(1000) 00:14:52.260 fused_ordering(1001) 00:14:52.260 fused_ordering(1002) 00:14:52.260 fused_ordering(1003) 00:14:52.260 fused_ordering(1004) 00:14:52.260 fused_ordering(1005) 00:14:52.260 fused_ordering(1006) 00:14:52.260 fused_ordering(1007) 00:14:52.260 fused_ordering(1008) 00:14:52.260 fused_ordering(1009) 00:14:52.260 fused_ordering(1010) 00:14:52.260 fused_ordering(1011) 00:14:52.260 fused_ordering(1012) 00:14:52.260 fused_ordering(1013) 00:14:52.260 fused_ordering(1014) 00:14:52.260 fused_ordering(1015) 00:14:52.260 fused_ordering(1016) 00:14:52.260 fused_ordering(1017) 00:14:52.260 fused_ordering(1018) 00:14:52.260 fused_ordering(1019) 00:14:52.260 fused_ordering(1020) 00:14:52.260 fused_ordering(1021) 00:14:52.260 fused_ordering(1022) 00:14:52.260 fused_ordering(1023) 00:14:52.260 01:49:37 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:52.260 01:49:37 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:52.260 01:49:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:52.260 01:49:37 -- nvmf/common.sh@116 -- # sync 00:14:52.260 01:49:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:52.260 01:49:37 -- nvmf/common.sh@119 -- # set +e 00:14:52.260 01:49:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:52.260 01:49:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:52.260 rmmod nvme_tcp 00:14:52.260 rmmod nvme_fabrics 00:14:52.260 rmmod nvme_keyring 00:14:52.260 01:49:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:52.260 01:49:37 -- nvmf/common.sh@123 -- # set -e 00:14:52.260 01:49:37 -- nvmf/common.sh@124 -- # return 0 00:14:52.260 01:49:37 -- nvmf/common.sh@477 -- # '[' -n 2124560 ']' 00:14:52.260 01:49:37 -- nvmf/common.sh@478 -- # killprocess 2124560 00:14:52.260 01:49:37 -- common/autotest_common.sh@926 -- # '[' -z 2124560 ']' 00:14:52.260 01:49:37 -- common/autotest_common.sh@930 -- # kill -0 2124560 00:14:52.260 01:49:37 -- common/autotest_common.sh@931 -- # uname 00:14:52.260 01:49:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:52.260 01:49:37 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 2124560 00:14:52.260 01:49:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:52.260 01:49:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:52.260 01:49:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2124560' 00:14:52.260 killing process with pid 2124560 00:14:52.260 01:49:37 -- common/autotest_common.sh@945 -- # kill 2124560 00:14:52.260 01:49:37 -- common/autotest_common.sh@950 -- # wait 2124560 00:14:52.519 01:49:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:52.519 01:49:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:52.519 01:49:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:52.519 01:49:37 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.520 01:49:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:52.520 01:49:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.520 01:49:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.520 01:49:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.448 01:49:40 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:14:54.448 00:14:54.448 real 0m10.681s 00:14:54.448 user 0m8.843s 00:14:54.448 sys 0m5.335s 00:14:54.448 01:49:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.448 01:49:40 -- common/autotest_common.sh@10 -- # set +x 00:14:54.448 ************************************ 00:14:54.448 END TEST nvmf_fused_ordering 00:14:54.448 ************************************ 00:14:54.448 01:49:40 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:54.448 01:49:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:54.448 01:49:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:54.448 01:49:40 -- common/autotest_common.sh@10 -- # set +x 00:14:54.448 ************************************ 00:14:54.448 START TEST nvmf_delete_subsystem 00:14:54.448 ************************************ 00:14:54.448 01:49:40 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:54.706 * Looking for test storage... 
00:14:54.706 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:54.706 01:49:40 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:54.706 01:49:40 -- nvmf/common.sh@7 -- # uname -s 00:14:54.706 01:49:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.706 01:49:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.706 01:49:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.706 01:49:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.706 01:49:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.706 01:49:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.706 01:49:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.706 01:49:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.706 01:49:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.706 01:49:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.706 01:49:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:54.706 01:49:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:54.706 01:49:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.706 01:49:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.706 01:49:40 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:54.706 01:49:40 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:54.706 01:49:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.706 01:49:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.706 01:49:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.706 01:49:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.706 01:49:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.707 01:49:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.707 01:49:40 -- paths/export.sh@5 -- # export PATH 00:14:54.707 01:49:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.707 01:49:40 -- nvmf/common.sh@46 -- # : 0 00:14:54.707 01:49:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:54.707 01:49:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:54.707 01:49:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:54.707 01:49:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.707 01:49:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.707 01:49:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:54.707 01:49:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:54.707 01:49:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:54.707 01:49:40 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:54.707 01:49:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:54.707 01:49:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.707 01:49:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:54.707 01:49:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:54.707 01:49:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:54.707 01:49:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.707 01:49:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.707 01:49:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.707 01:49:40 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:54.707 01:49:40 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:54.707 01:49:40 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:54.707 01:49:40 -- common/autotest_common.sh@10 -- # set +x 00:14:56.607 01:49:42 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:56.607 01:49:42 -- nvmf/common.sh@290 -- # pci_devs=() 00:14:56.607 01:49:42 -- nvmf/common.sh@290 -- # local -a pci_devs 00:14:56.607 01:49:42 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:14:56.607 01:49:42 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:14:56.607 01:49:42 -- nvmf/common.sh@292 -- # pci_drivers=() 00:14:56.607 01:49:42 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:14:56.607 01:49:42 -- nvmf/common.sh@294 -- # net_devs=() 00:14:56.607 01:49:42 -- nvmf/common.sh@294 -- # local -ga net_devs 00:14:56.607 01:49:42 -- nvmf/common.sh@295 -- # e810=() 00:14:56.607 01:49:42 -- nvmf/common.sh@295 -- # local -ga e810 00:14:56.607 01:49:42 -- nvmf/common.sh@296 -- # x722=() 
00:14:56.607 01:49:42 -- nvmf/common.sh@296 -- # local -ga x722 00:14:56.607 01:49:42 -- nvmf/common.sh@297 -- # mlx=() 00:14:56.607 01:49:42 -- nvmf/common.sh@297 -- # local -ga mlx 00:14:56.607 01:49:42 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.607 01:49:42 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.607 01:49:42 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.607 01:49:42 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.607 01:49:42 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.607 01:49:42 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.607 01:49:42 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.608 01:49:42 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.608 01:49:42 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.608 01:49:42 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.608 01:49:42 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.608 01:49:42 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:14:56.608 01:49:42 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:14:56.608 01:49:42 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:14:56.608 01:49:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:56.608 01:49:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:56.608 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:56.608 01:49:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:14:56.608 01:49:42 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:56.608 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:56.608 01:49:42 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:14:56.608 01:49:42 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:56.608 01:49:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.608 01:49:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:56.608 01:49:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.608 01:49:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:56.608 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:56.608 01:49:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:14:56.608 01:49:42 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:14:56.608 01:49:42 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.608 01:49:42 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:14:56.608 01:49:42 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.608 01:49:42 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:56.608 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:56.608 01:49:42 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.608 01:49:42 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:14:56.608 01:49:42 -- nvmf/common.sh@402 -- # is_hw=yes 00:14:56.608 01:49:42 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:14:56.608 01:49:42 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.608 01:49:42 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.608 01:49:42 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.608 01:49:42 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:14:56.608 01:49:42 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.608 01:49:42 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.608 01:49:42 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:14:56.608 01:49:42 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.608 01:49:42 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.608 01:49:42 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:14:56.608 01:49:42 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:14:56.608 01:49:42 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.608 01:49:42 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.608 01:49:42 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:56.608 01:49:42 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:56.608 01:49:42 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:14:56.608 01:49:42 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:56.608 01:49:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:56.608 01:49:42 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:56.608 01:49:42 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:14:56.608 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.608 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.177 ms 00:14:56.608 00:14:56.608 --- 10.0.0.2 ping statistics --- 00:14:56.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.608 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:14:56.608 01:49:42 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:56.608 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:56.608 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:14:56.608 00:14:56.608 --- 10.0.0.1 ping statistics --- 00:14:56.608 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.608 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:14:56.608 01:49:42 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.608 01:49:42 -- nvmf/common.sh@410 -- # return 0 00:14:56.608 01:49:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:56.608 01:49:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.608 01:49:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:56.608 01:49:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.608 01:49:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:56.608 01:49:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:56.608 01:49:42 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:56.608 01:49:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:56.608 01:49:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:56.608 01:49:42 -- common/autotest_common.sh@10 -- # set +x 00:14:56.608 01:49:42 -- nvmf/common.sh@469 -- # nvmfpid=2127209 00:14:56.608 01:49:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:56.608 01:49:42 -- nvmf/common.sh@470 -- # waitforlisten 2127209 00:14:56.608 01:49:42 -- common/autotest_common.sh@819 -- # '[' -z 2127209 ']' 00:14:56.608 01:49:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.608 01:49:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:56.608 01:49:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.608 01:49:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:56.608 01:49:42 -- common/autotest_common.sh@10 -- # set +x 00:14:56.866 [2024-04-15 01:49:42.291413] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:14:56.866 [2024-04-15 01:49:42.291498] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.866 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.866 [2024-04-15 01:49:42.358369] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:56.866 [2024-04-15 01:49:42.445155] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:56.866 [2024-04-15 01:49:42.445310] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.866 [2024-04-15 01:49:42.445338] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.866 [2024-04-15 01:49:42.445351] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
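Everything in nvmf_tcp_init above amounts to carving a loopback NVMe-oF fabric out of one dual-port NIC: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace and becomes the target endpoint (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), TCP port 4420 is opened in iptables, both directions are ping-verified, and nvmf_tgt is then launched inside the namespace. Condensed from the trace into a plain sequence (interface names as in this run):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the default ns
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3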
00:14:56.866 [2024-04-15 01:49:42.445424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.866 [2024-04-15 01:49:42.445430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.799 01:49:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:57.799 01:49:43 -- common/autotest_common.sh@852 -- # return 0 00:14:57.799 01:49:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:57.799 01:49:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:57.799 01:49:43 -- common/autotest_common.sh@10 -- # set +x 00:14:57.799 01:49:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:57.799 01:49:43 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:57.799 01:49:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.799 01:49:43 -- common/autotest_common.sh@10 -- # set +x 00:14:57.799 [2024-04-15 01:49:43.320638] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:57.799 01:49:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.799 01:49:43 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:57.799 01:49:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.799 01:49:43 -- common/autotest_common.sh@10 -- # set +x 00:14:57.799 01:49:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.799 01:49:43 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:57.799 01:49:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.799 01:49:43 -- common/autotest_common.sh@10 -- # set +x 00:14:57.799 [2024-04-15 01:49:43.336838] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.799 01:49:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.799 01:49:43 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:57.799 01:49:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.799 01:49:43 -- common/autotest_common.sh@10 -- # set +x 00:14:57.799 NULL1 00:14:57.799 01:49:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.799 01:49:43 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:57.799 01:49:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.799 01:49:43 -- common/autotest_common.sh@10 -- # set +x 00:14:57.799 Delay0 00:14:57.799 01:49:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.799 01:49:43 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:57.799 01:49:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:57.799 01:49:43 -- common/autotest_common.sh@10 -- # set +x 00:14:57.799 01:49:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:57.799 01:49:43 -- target/delete_subsystem.sh@28 -- # perf_pid=2127366 00:14:57.799 01:49:43 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:57.799 01:49:43 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:57.799 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.799 [2024-04-15 01:49:43.411669] 
subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:00.326 01:49:45 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:00.326 01:49:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:00.326 01:49:45 -- common/autotest_common.sh@10 -- # set +x
00:15:00.326 Read completed with error (sct=0, sc=8) 00:15:00.326 starting I/O failed: -6 00:15:00.326 Write completed with error (sct=0, sc=8)
[several hundred further "Read/Write completed with error (sct=0, sc=8)" completions and periodic "starting I/O failed: -6" markers from the in-flight perf workload, interleaved with the qpair state errors below, condensed]
00:15:00.326 [2024-04-15 01:49:45.582114] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fedd4000c00 is same with the state(5) to be set
00:15:00.327 [2024-04-15 01:49:45.583905] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x803820 is same with the state(5) to be set
00:15:01.261 [2024-04-15 01:49:46.553859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7e9d70 is same with the state(5) to be set
00:15:01.261 [2024-04-15 01:49:46.586185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x803570 is same with the state(5) to be set
00:15:01.261 [2024-04-15 01:49:46.586526] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8033f0 is same with the state(5) to be set
00:15:01.261 [2024-04-15 01:49:46.586706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fedd400c1d0 is same with the state(5) to be set
00:15:01.261 [2024-04-15 01:49:46.587183] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eb230 is same with the state(5) to be set
00:15:01.261 [2024-04-15 01:49:46.587929] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7e9d70 (9): Bad file descriptor
00:15:01.261 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:15:01.261 01:49:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.261 01:49:46 -- target/delete_subsystem.sh@34 -- # delay=0 00:15:01.261 01:49:46 -- target/delete_subsystem.sh@35 -- # kill -0 2127366 00:15:01.261 01:49:46 -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:15:01.262 Initializing NVMe Controllers 00:15:01.262 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:01.262 Controller IO queue size 128, less than required. 00:15:01.262 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:01.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:01.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:01.262 Initialization complete. Launching workers.
00:15:01.262 ======================================================== 00:15:01.262 Latency(us) 00:15:01.262 Device Information : IOPS MiB/s Average min max 00:15:01.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.72 0.08 1058287.41 986.97 2002635.30 00:15:01.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.33 0.08 877820.68 368.01 1010976.92 00:15:01.262 ======================================================== 00:15:01.262 Total : 325.05 0.16 972049.11 368.01 2002635.30 00:15:01.262 00:15:01.520 01:49:47 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:15:01.520 01:49:47 -- target/delete_subsystem.sh@35 -- # kill -0 2127366 00:15:01.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2127366) - No such process 00:15:01.520 01:49:47 -- target/delete_subsystem.sh@45 -- # NOT wait 2127366 00:15:01.520 01:49:47 -- common/autotest_common.sh@640 -- # local es=0 00:15:01.520 01:49:47 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 2127366 00:15:01.520 01:49:47 -- common/autotest_common.sh@628 -- # local arg=wait 00:15:01.520 01:49:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:01.520 01:49:47 -- common/autotest_common.sh@632 -- # type -t wait 00:15:01.520 01:49:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:15:01.520 01:49:47 -- common/autotest_common.sh@643 -- # wait 2127366 00:15:01.520 01:49:47 -- common/autotest_common.sh@643 -- # es=1 00:15:01.520 01:49:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:15:01.520 01:49:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:15:01.520 01:49:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:15:01.520 01:49:47 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:01.520 01:49:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.520 01:49:47 -- common/autotest_common.sh@10 -- # set +x 00:15:01.520 01:49:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.520 01:49:47 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.520 01:49:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.520 01:49:47 -- common/autotest_common.sh@10 -- # set +x 00:15:01.520 [2024-04-15 01:49:47.110762] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.520 01:49:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.520 01:49:47 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:15:01.520 01:49:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.520 01:49:47 -- common/autotest_common.sh@10 -- # set +x 00:15:01.520 01:49:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.520 01:49:47 -- target/delete_subsystem.sh@54 -- # perf_pid=2127905 00:15:01.520 01:49:47 -- target/delete_subsystem.sh@56 -- # delay=0 00:15:01.520 01:49:47 -- target/delete_subsystem.sh@57 -- # kill -0 2127905 00:15:01.520 01:49:47 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:01.520 01:49:47 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:15:01.520 EAL: No free 2048 kB hugepages 
reported on node 1 00:15:01.777 [2024-04-15 01:49:47.174121] subsystem.c:1304:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:15:02.035 01:49:47 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:02.035 01:49:47 -- target/delete_subsystem.sh@57 -- # kill -0 2127905 00:15:02.035 01:49:47 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:02.601 01:49:48 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:02.601 01:49:48 -- target/delete_subsystem.sh@57 -- # kill -0 2127905 00:15:02.601 01:49:48 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:03.166 01:49:48 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:03.166 01:49:48 -- target/delete_subsystem.sh@57 -- # kill -0 2127905 00:15:03.166 01:49:48 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:03.730 01:49:49 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:03.730 01:49:49 -- target/delete_subsystem.sh@57 -- # kill -0 2127905 00:15:03.730 01:49:49 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:04.294 01:49:49 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:04.294 01:49:49 -- target/delete_subsystem.sh@57 -- # kill -0 2127905 00:15:04.294 01:49:49 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:04.552 01:49:50 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:04.552 01:49:50 -- target/delete_subsystem.sh@57 -- # kill -0 2127905 00:15:04.552 01:49:50 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:15:04.810 Initializing NVMe Controllers 00:15:04.810 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:04.810 Controller IO queue size 128, less than required. 00:15:04.810 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:04.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:15:04.810 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:15:04.810 Initialization complete. Launching workers. 
00:15:04.810 ======================================================== 00:15:04.810 Latency(us) 00:15:04.810 Device Information : IOPS MiB/s Average min max 00:15:04.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004395.34 1000294.08 1012148.93 00:15:04.810 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004488.50 1000346.08 1012964.74 00:15:04.810 ======================================================== 00:15:04.810 Total : 256.00 0.12 1004441.92 1000294.08 1012964.74 00:15:04.810 00:15:05.068 01:49:50 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:15:05.068 01:49:50 -- target/delete_subsystem.sh@57 -- # kill -0 2127905 00:15:05.068 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2127905) - No such process 00:15:05.068 01:49:50 -- target/delete_subsystem.sh@67 -- # wait 2127905 00:15:05.068 01:49:50 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:15:05.068 01:49:50 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:15:05.068 01:49:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:05.068 01:49:50 -- nvmf/common.sh@116 -- # sync 00:15:05.068 01:49:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:05.068 01:49:50 -- nvmf/common.sh@119 -- # set +e 00:15:05.068 01:49:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:05.068 01:49:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:05.068 rmmod nvme_tcp 00:15:05.068 rmmod nvme_fabrics 00:15:05.068 rmmod nvme_keyring 00:15:05.068 01:49:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:05.068 01:49:50 -- nvmf/common.sh@123 -- # set -e 00:15:05.068 01:49:50 -- nvmf/common.sh@124 -- # return 0 00:15:05.068 01:49:50 -- nvmf/common.sh@477 -- # '[' -n 2127209 ']' 00:15:05.068 01:49:50 -- nvmf/common.sh@478 -- # killprocess 2127209 00:15:05.068 01:49:50 -- common/autotest_common.sh@926 -- # '[' -z 2127209 ']' 00:15:05.068 01:49:50 -- common/autotest_common.sh@930 -- # kill -0 2127209 00:15:05.068 01:49:50 -- common/autotest_common.sh@931 -- # uname 00:15:05.068 01:49:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:05.068 01:49:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2127209 00:15:05.328 01:49:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:05.328 01:49:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:05.328 01:49:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2127209' 00:15:05.328 killing process with pid 2127209 00:15:05.328 01:49:50 -- common/autotest_common.sh@945 -- # kill 2127209 00:15:05.328 01:49:50 -- common/autotest_common.sh@950 -- # wait 2127209 00:15:05.328 01:49:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:05.328 01:49:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:05.328 01:49:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:05.328 01:49:50 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:05.328 01:49:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:05.328 01:49:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.328 01:49:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:05.328 01:49:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.900 01:49:52 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:07.900 00:15:07.900 real 0m12.918s 00:15:07.900 user 0m29.235s 00:15:07.900 sys 0m3.095s 
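That closes the delete_subsystem case. In outline: rpc_cmd (effectively scripts/rpc.py against the target's RPC socket) builds a subsystem whose namespace is a null bdev wrapped in a delay bdev with one second of artificial latency per I/O, spdk_nvme_perf is started against it, and the subsystem is deleted while that I/O is in flight; the NOT wait on the perf pid then asserts that perf exited with an error, which is exactly the burst of failed completions seen above. Reduced to the RPC sequence actually issued in this run:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py bdev_null_create NULL1 1000 512
scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in microseconds
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
# start spdk_nvme_perf against 10.0.0.2:4420, then, mid-I/O:
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1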
00:15:07.900 01:49:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:07.900 01:49:52 -- common/autotest_common.sh@10 -- # set +x 00:15:07.900 ************************************ 00:15:07.900 END TEST nvmf_delete_subsystem 00:15:07.900 ************************************ 00:15:07.900 01:49:52 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:15:07.900 01:49:52 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:07.900 01:49:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:07.900 01:49:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:07.900 01:49:52 -- common/autotest_common.sh@10 -- # set +x 00:15:07.900 ************************************ 00:15:07.900 START TEST nvmf_nvme_cli 00:15:07.900 ************************************ 00:15:07.900 01:49:52 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:15:07.900 * Looking for test storage... 00:15:07.900 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:07.900 01:49:53 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.900 01:49:53 -- nvmf/common.sh@7 -- # uname -s 00:15:07.900 01:49:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.900 01:49:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.900 01:49:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.900 01:49:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.900 01:49:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:07.900 01:49:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:07.900 01:49:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.900 01:49:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:07.900 01:49:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.900 01:49:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:07.900 01:49:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.900 01:49:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:07.900 01:49:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.900 01:49:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:07.900 01:49:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:07.900 01:49:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.900 01:49:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.900 01:49:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.900 01:49:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.900 01:49:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same golangci/protoc/go triplet repeated four more times]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.900 01:49:53 -- paths/export.sh@3 -- # PATH=[the same PATH with /opt/go/1.21.1/bin moved to the front; repeated toolchain entries condensed] 00:15:07.900 01:49:53 -- paths/export.sh@4 -- # PATH=[the same PATH with /opt/protoc/21.7/bin moved to the front; repeated toolchain entries condensed] 00:15:07.900 01:49:53 -- paths/export.sh@5 -- # export PATH 00:15:07.900 01:49:53 -- paths/export.sh@6 -- # echo [the exported PATH, condensed as above] 00:15:07.900 01:49:53 -- nvmf/common.sh@46 -- # : 0 00:15:07.900 01:49:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:07.900 01:49:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:07.900 01:49:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:07.900 01:49:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.900 01:49:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.900 01:49:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:07.900 01:49:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:07.900 01:49:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:07.901 01:49:53 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:07.901 01:49:53 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:07.901 01:49:53 -- target/nvme_cli.sh@14 -- # devs=() 00:15:07.901 01:49:53 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:15:07.901 01:49:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:07.901 01:49:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.901 01:49:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:07.901 01:49:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:07.901 01:49:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:07.901 01:49:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:07.901 01:49:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:07.901 01:49:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:07.901 01:49:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:07.901 01:49:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:07.901 01:49:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:07.901 01:49:53 -- common/autotest_common.sh@10 -- # set +x 00:15:09.805 01:49:54 --
nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:09.805 01:49:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:09.805 01:49:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:09.805 01:49:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:09.805 01:49:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:09.805 01:49:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:09.805 01:49:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:09.805 01:49:54 -- nvmf/common.sh@294 -- # net_devs=() 00:15:09.805 01:49:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:09.805 01:49:54 -- nvmf/common.sh@295 -- # e810=() 00:15:09.805 01:49:54 -- nvmf/common.sh@295 -- # local -ga e810 00:15:09.805 01:49:54 -- nvmf/common.sh@296 -- # x722=() 00:15:09.805 01:49:54 -- nvmf/common.sh@296 -- # local -ga x722 00:15:09.805 01:49:54 -- nvmf/common.sh@297 -- # mlx=() 00:15:09.805 01:49:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:09.805 01:49:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:09.805 01:49:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:09.805 01:49:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:09.805 01:49:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:09.805 01:49:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:09.805 01:49:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:09.805 01:49:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:09.805 01:49:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:09.805 01:49:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:09.805 01:49:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:09.805 01:49:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:09.805 01:49:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:09.805 01:49:54 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:15:09.805 01:49:54 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:15:09.805 01:49:54 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:15:09.805 01:49:54 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:15:09.805 01:49:54 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:09.805 01:49:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:09.805 01:49:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:15:09.805 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:15:09.805 01:49:54 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:09.805 01:49:54 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:09.805 01:49:54 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.805 01:49:54 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.805 01:49:54 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:15:09.805 01:49:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:09.805 01:49:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:15:09.805 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:15:09.805 01:49:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:15:09.805 01:49:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:15:09.805 01:49:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:09.805 01:49:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:09.805 01:49:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 
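For the nvme_cli case, the host identity that the upcoming discover/connect commands present was generated when nvmf/common.sh was sourced above: nvme gen-hostnqn produced nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55, and the bare UUID doubles as the host ID. In sketch form (the exact extraction of the UUID from the NQN is an assumption, not shown in the trace):

NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}         # bare UUID suffix
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")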
00:15:09.805 01:49:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:09.805 01:49:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:15:09.805 01:49:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:15:09.805 01:49:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:09.805 01:49:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.805 01:49:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:09.805 01:49:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.805 01:49:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:15:09.805 Found net devices under 0000:0a:00.0: cvl_0_0 00:15:09.805 01:49:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.805 01:49:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:09.805 01:49:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:09.805 01:49:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:09.805 01:49:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:09.805 01:49:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:15:09.805 Found net devices under 0000:0a:00.1: cvl_0_1 00:15:09.805 01:49:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:09.805 01:49:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:09.805 01:49:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:09.805 01:49:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:09.805 01:49:55 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:15:09.805 01:49:55 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:15:09.805 01:49:55 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:09.805 01:49:55 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:09.805 01:49:55 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:09.805 01:49:55 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:15:09.805 01:49:55 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:09.805 01:49:55 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:09.805 01:49:55 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:15:09.805 01:49:55 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:09.805 01:49:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:09.805 01:49:55 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:15:09.805 01:49:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:15:09.805 01:49:55 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:15:09.805 01:49:55 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:09.805 01:49:55 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:09.805 01:49:55 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:09.805 01:49:55 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:15:09.805 01:49:55 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:09.805 01:49:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:09.805 01:49:55 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:09.805 01:49:55 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:15:09.805 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:09.805 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:15:09.805 00:15:09.805 --- 10.0.0.2 ping statistics --- 00:15:09.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.805 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:15:09.805 01:49:55 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:09.805 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:09.805 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:15:09.805 00:15:09.805 --- 10.0.0.1 ping statistics --- 00:15:09.805 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:09.805 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:15:09.805 01:49:55 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:09.805 01:49:55 -- nvmf/common.sh@410 -- # return 0 00:15:09.805 01:49:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:09.805 01:49:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:09.805 01:49:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:09.805 01:49:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:09.805 01:49:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:09.805 01:49:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:09.805 01:49:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:09.805 01:49:55 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:15:09.805 01:49:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:09.805 01:49:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:09.805 01:49:55 -- common/autotest_common.sh@10 -- # set +x 00:15:09.805 01:49:55 -- nvmf/common.sh@469 -- # nvmfpid=2130267 00:15:09.805 01:49:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:09.805 01:49:55 -- nvmf/common.sh@470 -- # waitforlisten 2130267 00:15:09.805 01:49:55 -- common/autotest_common.sh@819 -- # '[' -z 2130267 ']' 00:15:09.805 01:49:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:09.805 01:49:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:09.805 01:49:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:09.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:09.805 01:49:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:09.805 01:49:55 -- common/autotest_common.sh@10 -- # set +x 00:15:09.805 [2024-04-15 01:49:55.211676] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:15:09.805 [2024-04-15 01:49:55.211777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.805 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.806 [2024-04-15 01:49:55.283496] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.806 [2024-04-15 01:49:55.374756] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:09.806 [2024-04-15 01:49:55.374939] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.806 [2024-04-15 01:49:55.374959] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:09.806 [2024-04-15 01:49:55.374973] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.806 [2024-04-15 01:49:55.375054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.806 [2024-04-15 01:49:55.375099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.806 [2024-04-15 01:49:55.375214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.806 [2024-04-15 01:49:55.375217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.754 01:49:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:10.754 01:49:56 -- common/autotest_common.sh@852 -- # return 0 00:15:10.754 01:49:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:10.754 01:49:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:10.754 01:49:56 -- common/autotest_common.sh@10 -- # set +x 00:15:10.754 01:49:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.754 01:49:56 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:10.754 01:49:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.754 01:49:56 -- common/autotest_common.sh@10 -- # set +x 00:15:10.754 [2024-04-15 01:49:56.185625] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:10.754 01:49:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.754 01:49:56 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:10.754 01:49:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.754 01:49:56 -- common/autotest_common.sh@10 -- # set +x 00:15:10.754 Malloc0 00:15:10.754 01:49:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.754 01:49:56 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:10.754 01:49:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.754 01:49:56 -- common/autotest_common.sh@10 -- # set +x 00:15:10.754 Malloc1 00:15:10.754 01:49:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.754 01:49:56 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:15:10.754 01:49:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.754 01:49:56 -- common/autotest_common.sh@10 -- # set +x 00:15:10.754 01:49:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.754 01:49:56 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:10.754 01:49:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.754 01:49:56 -- common/autotest_common.sh@10 -- # set +x 00:15:10.754 01:49:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.754 01:49:56 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:10.754 01:49:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.754 01:49:56 -- common/autotest_common.sh@10 -- # set +x 00:15:10.754 01:49:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.754 01:49:56 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:10.754 01:49:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.754 01:49:56 -- common/autotest_common.sh@10 -- # set +x 00:15:10.754 [2024-04-15 01:49:56.266767] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:15:10.754 01:49:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.754 01:49:56 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:10.754 01:49:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:10.754 01:49:56 -- common/autotest_common.sh@10 -- # set +x 00:15:10.754 01:49:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:10.754 01:49:56 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:15:10.754 00:15:10.754 Discovery Log Number of Records 2, Generation counter 2 00:15:10.754 =====Discovery Log Entry 0====== 00:15:10.754 trtype: tcp 00:15:10.754 adrfam: ipv4 00:15:10.754 subtype: current discovery subsystem 00:15:10.754 treq: not required 00:15:10.754 portid: 0 00:15:10.754 trsvcid: 4420 00:15:10.754 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:15:10.754 traddr: 10.0.0.2 00:15:10.754 eflags: explicit discovery connections, duplicate discovery information 00:15:10.754 sectype: none 00:15:10.754 =====Discovery Log Entry 1====== 00:15:10.754 trtype: tcp 00:15:10.754 adrfam: ipv4 00:15:10.754 subtype: nvme subsystem 00:15:10.754 treq: not required 00:15:10.754 portid: 0 00:15:10.754 trsvcid: 4420 00:15:10.754 subnqn: nqn.2016-06.io.spdk:cnode1 00:15:10.754 traddr: 10.0.0.2 00:15:10.754 eflags: none 00:15:10.754 sectype: none 00:15:10.754 01:49:56 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:15:10.754 01:49:56 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:15:10.754 01:49:56 -- nvmf/common.sh@510 -- # local dev _ 00:15:10.754 01:49:56 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:10.754 01:49:56 -- nvmf/common.sh@509 -- # nvme list 00:15:10.754 01:49:56 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:10.754 01:49:56 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:10.754 01:49:56 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:10.754 01:49:56 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:10.754 01:49:56 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:15:10.754 01:49:56 -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:11.689 01:49:57 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:11.689 01:49:57 -- common/autotest_common.sh@1177 -- # local i=0 00:15:11.689 01:49:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:11.689 01:49:57 -- common/autotest_common.sh@1179 -- # [[ -n 2 ]] 00:15:11.689 01:49:57 -- common/autotest_common.sh@1180 -- # nvme_device_counter=2 00:15:11.689 01:49:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:13.586 01:49:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:13.586 01:49:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:13.586 01:49:59 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:13.586 01:49:59 -- common/autotest_common.sh@1186 -- # nvme_devices=2 00:15:13.586 01:49:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:13.586 01:49:59 -- common/autotest_common.sh@1187 -- # return 0 00:15:13.586 01:49:59 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:13.586 01:49:59 -- 
nvmf/common.sh@510 -- # local dev _ 00:15:13.586 01:49:59 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:13.586 01:49:59 -- nvmf/common.sh@509 -- # nvme list 00:15:13.586 01:49:59 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:13.586 01:49:59 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:13.586 01:49:59 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:13.586 01:49:59 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:13.586 01:49:59 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:13.586 01:49:59 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:13.586 01:49:59 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:13.586 01:49:59 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:13.586 01:49:59 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:13.586 01:49:59 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:13.586 01:49:59 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:13.586 /dev/nvme0n1 ]] 00:15:13.586 01:49:59 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:13.586 01:49:59 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:13.586 01:49:59 -- nvmf/common.sh@510 -- # local dev _ 00:15:13.586 01:49:59 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:13.586 01:49:59 -- nvmf/common.sh@509 -- # nvme list 00:15:13.844 01:49:59 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:15:13.844 01:49:59 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:13.844 01:49:59 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:15:13.844 01:49:59 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:13.844 01:49:59 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:13.844 01:49:59 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:15:13.844 01:49:59 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:13.844 01:49:59 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:13.844 01:49:59 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:15:13.844 01:49:59 -- nvmf/common.sh@512 -- # read -r dev _ 00:15:13.844 01:49:59 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:13.844 01:49:59 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:14.102 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.102 01:49:59 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:14.102 01:49:59 -- common/autotest_common.sh@1198 -- # local i=0 00:15:14.102 01:49:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:14.102 01:49:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.102 01:49:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:14.102 01:49:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.102 01:49:59 -- common/autotest_common.sh@1210 -- # return 0 00:15:14.102 01:49:59 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:14.102 01:49:59 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:14.102 01:49:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:14.102 01:49:59 -- common/autotest_common.sh@10 -- # set +x 00:15:14.102 01:49:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:14.102 01:49:59 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:14.102 01:49:59 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:14.102 01:49:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:14.102 01:49:59 -- nvmf/common.sh@116 -- # sync 00:15:14.102 01:49:59 -- 
nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:14.102 01:49:59 -- nvmf/common.sh@119 -- # set +e 00:15:14.102 01:49:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:14.102 01:49:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:14.102 rmmod nvme_tcp 00:15:14.102 rmmod nvme_fabrics 00:15:14.102 rmmod nvme_keyring 00:15:14.102 01:49:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:14.102 01:49:59 -- nvmf/common.sh@123 -- # set -e 00:15:14.102 01:49:59 -- nvmf/common.sh@124 -- # return 0 00:15:14.102 01:49:59 -- nvmf/common.sh@477 -- # '[' -n 2130267 ']' 00:15:14.102 01:49:59 -- nvmf/common.sh@478 -- # killprocess 2130267 00:15:14.102 01:49:59 -- common/autotest_common.sh@926 -- # '[' -z 2130267 ']' 00:15:14.102 01:49:59 -- common/autotest_common.sh@930 -- # kill -0 2130267 00:15:14.102 01:49:59 -- common/autotest_common.sh@931 -- # uname 00:15:14.102 01:49:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:14.102 01:49:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2130267 00:15:14.102 01:49:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:14.102 01:49:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:14.102 01:49:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2130267' 00:15:14.102 killing process with pid 2130267 00:15:14.102 01:49:59 -- common/autotest_common.sh@945 -- # kill 2130267 00:15:14.102 01:49:59 -- common/autotest_common.sh@950 -- # wait 2130267 00:15:14.361 01:49:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:14.361 01:49:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:14.361 01:49:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:14.361 01:49:59 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:14.361 01:49:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:14.361 01:49:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:14.361 01:49:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:14.361 01:49:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:16.889 01:50:02 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:15:16.889 00:15:16.889 real 0m9.032s 00:15:16.889 user 0m18.965s 00:15:16.889 sys 0m2.211s 00:15:16.889 01:50:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.889 01:50:02 -- common/autotest_common.sh@10 -- # set +x 00:15:16.889 ************************************ 00:15:16.889 END TEST nvmf_nvme_cli 00:15:16.889 ************************************ 00:15:16.889 01:50:02 -- nvmf/nvmf.sh@39 -- # [[ 1 -eq 1 ]] 00:15:16.889 01:50:02 -- nvmf/nvmf.sh@40 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:16.889 01:50:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:16.889 01:50:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:16.889 01:50:02 -- common/autotest_common.sh@10 -- # set +x 00:15:16.889 ************************************ 00:15:16.889 START TEST nvmf_vfio_user 00:15:16.889 ************************************ 00:15:16.889 01:50:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:16.889 * Looking for test storage... 
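Before the vfio-user run gets going, it is worth spelling out what the nvmf_nvme_cli test that just ended (about 9 seconds wall time, per the timing summary) actually exercised: the kernel initiator driving an SPDK TCP target end to end. Stripped of the harness plumbing, the initiator side reduces to roughly the following (the hostnqn/hostid pair comes from nvme gen-hostnqn and differs per machine, so it is omitted here):

    nvme discover -t tcp -a 10.0.0.2 -s 4420           # expect the 2 discovery log records seen above
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # 2 namespaces: /dev/nvme0n1, /dev/nvme0n2
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The matching teardown is what follows the disconnect above: the EXIT trap unloads nvme-tcp/nvme-fabrics/nvme-keyring and kills the target pid.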
00:15:16.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:16.889 01:50:02 -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:16.889 01:50:02 -- nvmf/common.sh@7 -- # uname -s 00:15:16.889 01:50:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:16.889 01:50:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:16.889 01:50:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:16.889 01:50:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:16.889 01:50:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:16.889 01:50:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:16.889 01:50:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:16.889 01:50:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:16.889 01:50:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:16.889 01:50:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:16.889 01:50:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.889 01:50:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:16.889 01:50:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:16.889 01:50:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:16.889 01:50:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:16.889 01:50:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:16.889 01:50:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:16.889 01:50:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:16.889 01:50:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:16.889 01:50:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.889 01:50:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.889 01:50:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.889 01:50:02 -- paths/export.sh@5 -- # export PATH 00:15:16.889 01:50:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:16.889 01:50:02 -- nvmf/common.sh@46 -- # : 0 00:15:16.889 01:50:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:16.889 01:50:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:16.889 01:50:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:16.889 01:50:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:16.889 01:50:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:16.889 01:50:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:16.889 01:50:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:16.889 01:50:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:16.889 01:50:02 -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:16.889 01:50:02 -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:16.889 01:50:02 -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:16.889 01:50:02 -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:16.889 01:50:02 -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:16.889 01:50:02 -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:16.889 01:50:02 -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:16.889 01:50:02 -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:16.889 01:50:02 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:16.889 01:50:02 -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:16.889 01:50:02 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2131229 00:15:16.889 01:50:02 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:16.889 01:50:02 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2131229' 00:15:16.889 Process pid: 2131229 00:15:16.889 01:50:02 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:16.889 01:50:02 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2131229 00:15:16.889 01:50:02 -- common/autotest_common.sh@819 -- # '[' -z 2131229 ']' 00:15:16.889 01:50:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:16.889 01:50:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:16.889 01:50:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:16.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:16.889 01:50:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:16.889 01:50:02 -- common/autotest_common.sh@10 -- # set +x 00:15:16.890 [2024-04-15 01:50:02.161369] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:15:16.890 [2024-04-15 01:50:02.161488] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:16.890 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.890 [2024-04-15 01:50:02.223940] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:16.890 [2024-04-15 01:50:02.311627] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:16.890 [2024-04-15 01:50:02.311779] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:16.890 [2024-04-15 01:50:02.311796] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:16.890 [2024-04-15 01:50:02.311808] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:16.890 [2024-04-15 01:50:02.311953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:16.890 [2024-04-15 01:50:02.312020] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:16.890 [2024-04-15 01:50:02.312087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:16.890 [2024-04-15 01:50:02.312090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.821 01:50:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:17.821 01:50:03 -- common/autotest_common.sh@852 -- # return 0 00:15:17.821 01:50:03 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:18.753 01:50:04 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:19.011 01:50:04 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:19.011 01:50:04 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:19.011 01:50:04 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:19.011 01:50:04 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:19.011 01:50:04 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:19.269 Malloc1 00:15:19.269 01:50:04 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:19.527 01:50:04 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:19.784 01:50:05 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:19.784 01:50:05 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:19.784 01:50:05 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:19.784 01:50:05 -- 
target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:20.041 Malloc2 00:15:20.041 01:50:05 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:20.298 01:50:05 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:20.556 01:50:06 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:20.813 01:50:06 -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:20.813 01:50:06 -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:20.813 01:50:06 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:20.813 01:50:06 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:20.813 01:50:06 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:20.813 01:50:06 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:20.813 [2024-04-15 01:50:06.451156] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:15:20.814 [2024-04-15 01:50:06.451202] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2131793 ] 00:15:21.073 EAL: No free 2048 kB hugepages reported on node 1 00:15:21.073 [2024-04-15 01:50:06.486481] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:21.073 [2024-04-15 01:50:06.496181] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:21.073 [2024-04-15 01:50:06.496210] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7207e0a000 00:15:21.073 [2024-04-15 01:50:06.497181] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.073 [2024-04-15 01:50:06.498178] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.073 [2024-04-15 01:50:06.499182] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.073 [2024-04-15 01:50:06.500188] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.073 [2024-04-15 01:50:06.501191] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.073 [2024-04-15 01:50:06.502203] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.073 [2024-04-15 01:50:06.503206] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:21.073 [2024-04-15 01:50:06.508070] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:21.073 [2024-04-15 01:50:06.508226] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:21.073 [2024-04-15 01:50:06.508246] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7206bbe000 00:15:21.073 [2024-04-15 01:50:06.509384] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:21.073 [2024-04-15 01:50:06.521001] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:21.073 [2024-04-15 01:50:06.521039] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:21.073 [2024-04-15 01:50:06.527361] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:21.073 [2024-04-15 01:50:06.527416] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:21.073 [2024-04-15 01:50:06.527506] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:21.073 [2024-04-15 01:50:06.527534] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:21.073 [2024-04-15 01:50:06.527544] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:21.073 [2024-04-15 01:50:06.528351] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:21.073 [2024-04-15 01:50:06.528375] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:21.073 [2024-04-15 01:50:06.528388] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:21.073 [2024-04-15 01:50:06.529340] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:21.073 [2024-04-15 01:50:06.529378] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:21.073 [2024-04-15 01:50:06.529393] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:21.073 [2024-04-15 01:50:06.530362] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:21.073 [2024-04-15 01:50:06.530380] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:21.073 [2024-04-15 01:50:06.531361] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x1c, value 0x0 00:15:21.073 [2024-04-15 01:50:06.531378] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:21.073 [2024-04-15 01:50:06.531387] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:21.073 [2024-04-15 01:50:06.531398] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:21.073 [2024-04-15 01:50:06.531507] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:21.073 [2024-04-15 01:50:06.531515] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:21.073 [2024-04-15 01:50:06.531523] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:21.073 [2024-04-15 01:50:06.532371] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:21.073 [2024-04-15 01:50:06.533369] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:21.073 [2024-04-15 01:50:06.534376] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:21.073 [2024-04-15 01:50:06.535431] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:21.073 [2024-04-15 01:50:06.536398] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:21.073 [2024-04-15 01:50:06.536415] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:21.073 [2024-04-15 01:50:06.536423] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:21.073 [2024-04-15 01:50:06.536447] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:21.074 [2024-04-15 01:50:06.536460] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.536485] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.074 [2024-04-15 01:50:06.536495] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.074 [2024-04-15 01:50:06.536514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.074 [2024-04-15 01:50:06.536595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:21.074 [2024-04-15 01:50:06.536610] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:21.074 [2024-04-15 01:50:06.536622] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:21.074 [2024-04-15 01:50:06.536630] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:21.074 [2024-04-15 01:50:06.536638] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:21.074 [2024-04-15 01:50:06.536645] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:21.074 [2024-04-15 01:50:06.536652] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:21.074 [2024-04-15 01:50:06.536660] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.536675] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.536690] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:21.074 [2024-04-15 01:50:06.536708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:21.074 [2024-04-15 01:50:06.536724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.074 [2024-04-15 01:50:06.536735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.074 [2024-04-15 01:50:06.536747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.074 [2024-04-15 01:50:06.536758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:21.074 [2024-04-15 01:50:06.536766] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.536780] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.536793] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:21.074 [2024-04-15 01:50:06.536804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:21.074 [2024-04-15 01:50:06.536814] nvme_ctrlr.c:2877:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:21.074 [2024-04-15 01:50:06.536822] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.536832] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.536850] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.536864] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:21.074 [2024-04-15 01:50:06.536881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:21.074 [2024-04-15 01:50:06.536930] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.536943] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.536955] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:21.074 [2024-04-15 01:50:06.536963] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:21.074 [2024-04-15 01:50:06.536973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:21.074 [2024-04-15 01:50:06.536986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:21.074 [2024-04-15 01:50:06.537001] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:21.074 [2024-04-15 01:50:06.537019] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.537058] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.537072] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.074 [2024-04-15 01:50:06.537080] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.074 [2024-04-15 01:50:06.537090] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.074 [2024-04-15 01:50:06.537118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:21.074 [2024-04-15 01:50:06.537139] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.537152] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.537164] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:21.074 [2024-04-15 01:50:06.537172] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.074 [2024-04-15 01:50:06.537181] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.074 [2024-04-15 01:50:06.537192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:21.074 [2024-04-15 01:50:06.537205] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.537216] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.537230] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.537240] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.537251] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.537260] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:21.074 [2024-04-15 01:50:06.537268] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:21.074 [2024-04-15 01:50:06.537276] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:21.074 [2024-04-15 01:50:06.537303] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:21.074 [2024-04-15 01:50:06.537320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:21.074 [2024-04-15 01:50:06.537353] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:21.074 [2024-04-15 01:50:06.537365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:21.074 [2024-04-15 01:50:06.537380] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:21.074 [2024-04-15 01:50:06.537398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:21.074 [2024-04-15 01:50:06.537414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:21.074 [2024-04-15 01:50:06.537427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:21.074 [2024-04-15 01:50:06.537444] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:21.074 [2024-04-15 01:50:06.537453] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:21.074 [2024-04-15 01:50:06.537459] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:21.074 [2024-04-15 
01:50:06.537465] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:21.074 [2024-04-15 01:50:06.537473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:21.074 [2024-04-15 01:50:06.537485] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:21.074 [2024-04-15 01:50:06.537492] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:21.074 [2024-04-15 01:50:06.537501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:21.074 [2024-04-15 01:50:06.537511] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:21.074 [2024-04-15 01:50:06.537519] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:21.074 [2024-04-15 01:50:06.537527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:21.074 [2024-04-15 01:50:06.537539] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:21.074 [2024-04-15 01:50:06.537546] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:21.074 [2024-04-15 01:50:06.537555] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:21.074 [2024-04-15 01:50:06.537566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:21.074 [2024-04-15 01:50:06.537588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:21.074 [2024-04-15 01:50:06.537603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:21.075 [2024-04-15 01:50:06.537614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:21.075 ===================================================== 00:15:21.075 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:21.075 ===================================================== 00:15:21.075 Controller Capabilities/Features 00:15:21.075 ================================ 00:15:21.075 Vendor ID: 4e58 00:15:21.075 Subsystem Vendor ID: 4e58 00:15:21.075 Serial Number: SPDK1 00:15:21.075 Model Number: SPDK bdev Controller 00:15:21.075 Firmware Version: 24.01.1 00:15:21.075 Recommended Arb Burst: 6 00:15:21.075 IEEE OUI Identifier: 8d 6b 50 00:15:21.075 Multi-path I/O 00:15:21.075 May have multiple subsystem ports: Yes 00:15:21.075 May have multiple controllers: Yes 00:15:21.075 Associated with SR-IOV VF: No 00:15:21.075 Max Data Transfer Size: 131072 00:15:21.075 Max Number of Namespaces: 32 00:15:21.075 Max Number of I/O Queues: 127 00:15:21.075 NVMe Specification Version (VS): 1.3 00:15:21.075 NVMe Specification Version (Identify): 1.3 00:15:21.075 Maximum Queue Entries: 256 00:15:21.075 Contiguous Queues Required: Yes 00:15:21.075 Arbitration Mechanisms Supported 00:15:21.075 
Weighted Round Robin: Not Supported 00:15:21.075 Vendor Specific: Not Supported 00:15:21.075 Reset Timeout: 15000 ms 00:15:21.075 Doorbell Stride: 4 bytes 00:15:21.075 NVM Subsystem Reset: Not Supported 00:15:21.075 Command Sets Supported 00:15:21.075 NVM Command Set: Supported 00:15:21.075 Boot Partition: Not Supported 00:15:21.075 Memory Page Size Minimum: 4096 bytes 00:15:21.075 Memory Page Size Maximum: 4096 bytes 00:15:21.075 Persistent Memory Region: Not Supported 00:15:21.075 Optional Asynchronous Events Supported 00:15:21.075 Namespace Attribute Notices: Supported 00:15:21.075 Firmware Activation Notices: Not Supported 00:15:21.075 ANA Change Notices: Not Supported 00:15:21.075 PLE Aggregate Log Change Notices: Not Supported 00:15:21.075 LBA Status Info Alert Notices: Not Supported 00:15:21.075 EGE Aggregate Log Change Notices: Not Supported 00:15:21.075 Normal NVM Subsystem Shutdown event: Not Supported 00:15:21.075 Zone Descriptor Change Notices: Not Supported 00:15:21.075 Discovery Log Change Notices: Not Supported 00:15:21.075 Controller Attributes 00:15:21.075 128-bit Host Identifier: Supported 00:15:21.075 Non-Operational Permissive Mode: Not Supported 00:15:21.075 NVM Sets: Not Supported 00:15:21.075 Read Recovery Levels: Not Supported 00:15:21.075 Endurance Groups: Not Supported 00:15:21.075 Predictable Latency Mode: Not Supported 00:15:21.075 Traffic Based Keep Alive: Not Supported 00:15:21.075 Namespace Granularity: Not Supported 00:15:21.075 SQ Associations: Not Supported 00:15:21.075 UUID List: Not Supported 00:15:21.075 Multi-Domain Subsystem: Not Supported 00:15:21.075 Fixed Capacity Management: Not Supported 00:15:21.075 Variable Capacity Management: Not Supported 00:15:21.075 Delete Endurance Group: Not Supported 00:15:21.075 Delete NVM Set: Not Supported 00:15:21.075 Extended LBA Formats Supported: Not Supported 00:15:21.075 Flexible Data Placement Supported: Not Supported 00:15:21.075 00:15:21.075 Controller Memory Buffer Support 00:15:21.075 ================================ 00:15:21.075 Supported: No 00:15:21.075 00:15:21.075 Persistent Memory Region Support 00:15:21.075 ================================ 00:15:21.075 Supported: No 00:15:21.075 00:15:21.075 Admin Command Set Attributes 00:15:21.075 ============================ 00:15:21.075 Security Send/Receive: Not Supported 00:15:21.075 Format NVM: Not Supported 00:15:21.075 Firmware Activate/Download: Not Supported 00:15:21.075 Namespace Management: Not Supported 00:15:21.075 Device Self-Test: Not Supported 00:15:21.075 Directives: Not Supported 00:15:21.075 NVMe-MI: Not Supported 00:15:21.075 Virtualization Management: Not Supported 00:15:21.075 Doorbell Buffer Config: Not Supported 00:15:21.075 Get LBA Status Capability: Not Supported 00:15:21.075 Command & Feature Lockdown Capability: Not Supported 00:15:21.075 Abort Command Limit: 4 00:15:21.075 Async Event Request Limit: 4 00:15:21.075 Number of Firmware Slots: N/A 00:15:21.075 Firmware Slot 1 Read-Only: N/A 00:15:21.075 Firmware Activation Without Reset: N/A 00:15:21.075 Multiple Update Detection Support: N/A 00:15:21.075 Firmware Update Granularity: No Information Provided 00:15:21.075 Per-Namespace SMART Log: No 00:15:21.075 Asymmetric Namespace Access Log Page: Not Supported 00:15:21.075 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:21.075 Command Effects Log Page: Supported 00:15:21.075 Get Log Page Extended Data: Supported 00:15:21.075 Telemetry Log Pages: Not Supported 00:15:21.075 Persistent Event Log Pages: Not Supported 00:15:21.075 Supported 
Log Pages Log Page: May Support 00:15:21.075 Commands Supported & Effects Log Page: Not Supported 00:15:21.075 Feature Identifiers & Effects Log Page:May Support 00:15:21.075 NVMe-MI Commands & Effects Log Page: May Support 00:15:21.075 Data Area 4 for Telemetry Log: Not Supported 00:15:21.075 Error Log Page Entries Supported: 128 00:15:21.075 Keep Alive: Supported 00:15:21.075 Keep Alive Granularity: 10000 ms 00:15:21.075 00:15:21.075 NVM Command Set Attributes 00:15:21.075 ========================== 00:15:21.075 Submission Queue Entry Size 00:15:21.075 Max: 64 00:15:21.075 Min: 64 00:15:21.075 Completion Queue Entry Size 00:15:21.075 Max: 16 00:15:21.075 Min: 16 00:15:21.075 Number of Namespaces: 32 00:15:21.075 Compare Command: Supported 00:15:21.075 Write Uncorrectable Command: Not Supported 00:15:21.075 Dataset Management Command: Supported 00:15:21.075 Write Zeroes Command: Supported 00:15:21.075 Set Features Save Field: Not Supported 00:15:21.075 Reservations: Not Supported 00:15:21.075 Timestamp: Not Supported 00:15:21.075 Copy: Supported 00:15:21.075 Volatile Write Cache: Present 00:15:21.075 Atomic Write Unit (Normal): 1 00:15:21.075 Atomic Write Unit (PFail): 1 00:15:21.075 Atomic Compare & Write Unit: 1 00:15:21.075 Fused Compare & Write: Supported 00:15:21.075 Scatter-Gather List 00:15:21.075 SGL Command Set: Supported (Dword aligned) 00:15:21.075 SGL Keyed: Not Supported 00:15:21.075 SGL Bit Bucket Descriptor: Not Supported 00:15:21.075 SGL Metadata Pointer: Not Supported 00:15:21.075 Oversized SGL: Not Supported 00:15:21.075 SGL Metadata Address: Not Supported 00:15:21.075 SGL Offset: Not Supported 00:15:21.075 Transport SGL Data Block: Not Supported 00:15:21.075 Replay Protected Memory Block: Not Supported 00:15:21.075 00:15:21.075 Firmware Slot Information 00:15:21.075 ========================= 00:15:21.075 Active slot: 1 00:15:21.075 Slot 1 Firmware Revision: 24.01.1 00:15:21.075 00:15:21.075 00:15:21.075 Commands Supported and Effects 00:15:21.075 ============================== 00:15:21.075 Admin Commands 00:15:21.075 -------------- 00:15:21.075 Get Log Page (02h): Supported 00:15:21.075 Identify (06h): Supported 00:15:21.075 Abort (08h): Supported 00:15:21.075 Set Features (09h): Supported 00:15:21.075 Get Features (0Ah): Supported 00:15:21.075 Asynchronous Event Request (0Ch): Supported 00:15:21.075 Keep Alive (18h): Supported 00:15:21.075 I/O Commands 00:15:21.075 ------------ 00:15:21.075 Flush (00h): Supported LBA-Change 00:15:21.075 Write (01h): Supported LBA-Change 00:15:21.075 Read (02h): Supported 00:15:21.075 Compare (05h): Supported 00:15:21.075 Write Zeroes (08h): Supported LBA-Change 00:15:21.075 Dataset Management (09h): Supported LBA-Change 00:15:21.075 Copy (19h): Supported LBA-Change 00:15:21.075 Unknown (79h): Supported LBA-Change 00:15:21.075 Unknown (7Ah): Supported 00:15:21.075 00:15:21.075 Error Log 00:15:21.075 ========= 00:15:21.075 00:15:21.075 Arbitration 00:15:21.075 =========== 00:15:21.075 Arbitration Burst: 1 00:15:21.075 00:15:21.075 Power Management 00:15:21.075 ================ 00:15:21.075 Number of Power States: 1 00:15:21.075 Current Power State: Power State #0 00:15:21.075 Power State #0: 00:15:21.075 Max Power: 0.00 W 00:15:21.075 Non-Operational State: Operational 00:15:21.075 Entry Latency: Not Reported 00:15:21.075 Exit Latency: Not Reported 00:15:21.075 Relative Read Throughput: 0 00:15:21.075 Relative Read Latency: 0 00:15:21.075 Relative Write Throughput: 0 00:15:21.075 Relative Write Latency: 0 00:15:21.075 Idle Power: Not 
Reported 00:15:21.075 Active Power: Not Reported 00:15:21.075 Non-Operational Permissive Mode: Not Supported 00:15:21.075 00:15:21.075 Health Information 00:15:21.075 ================== 00:15:21.075 Critical Warnings: 00:15:21.075 Available Spare Space: OK 00:15:21.076 Temperature: OK 00:15:21.076 Device Reliability: OK 00:15:21.076 Read Only: No 00:15:21.076 Volatile Memory Backup: OK 00:15:21.076 Current Temperature: 0 Kelvin (-273 Celsius) [2024-04-15 01:50:06.537744] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:21.076 [2024-04-15 01:50:06.537759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:21.076 [2024-04-15 01:50:06.537800] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:21.076 [2024-04-15 01:50:06.537816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.076 [2024-04-15 01:50:06.537826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.076 [2024-04-15 01:50:06.537836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.076 [2024-04-15 01:50:06.537845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:21.076 [2024-04-15 01:50:06.541058] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:21.076 [2024-04-15 01:50:06.541080] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:21.076 [2024-04-15 01:50:06.541468] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:21.076 [2024-04-15 01:50:06.541481] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:21.076 [2024-04-15 01:50:06.542440] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:21.076 [2024-04-15 01:50:06.542462] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:21.076 [2024-04-15 01:50:06.542513] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:21.076 [2024-04-15 01:50:06.544494] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:21.076 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:21.076 Available Spare: 0% 00:15:21.076 Available Spare Threshold: 0% 00:15:21.076 Life Percentage Used: 0% 00:15:21.076 Data Units Read: 0 00:15:21.076 Data Units Written: 0 00:15:21.076 Host Read Commands: 0 00:15:21.076 Host Write Commands: 0 00:15:21.076 Controller Busy Time: 0 minutes 00:15:21.076 Power Cycles: 0 00:15:21.076 Power On Hours: 0 hours 00:15:21.076 Unsafe Shutdowns: 0 00:15:21.076 Unrecoverable Media Errors: 0 00:15:21.076 Lifetime Error Log Entries: 0 00:15:21.076 Warning Temperature 
Time: 0 minutes 00:15:21.076 Critical Temperature Time: 0 minutes 00:15:21.076 00:15:21.076 Number of Queues 00:15:21.076 ================ 00:15:21.076 Number of I/O Submission Queues: 127 00:15:21.076 Number of I/O Completion Queues: 127 00:15:21.076 00:15:21.076 Active Namespaces 00:15:21.076 ================= 00:15:21.076 Namespace ID:1 00:15:21.076 Error Recovery Timeout: Unlimited 00:15:21.076 Command Set Identifier: NVM (00h) 00:15:21.076 Deallocate: Supported 00:15:21.076 Deallocated/Unwritten Error: Not Supported 00:15:21.076 Deallocated Read Value: Unknown 00:15:21.076 Deallocate in Write Zeroes: Not Supported 00:15:21.076 Deallocated Guard Field: 0xFFFF 00:15:21.076 Flush: Supported 00:15:21.076 Reservation: Supported 00:15:21.076 Namespace Sharing Capabilities: Multiple Controllers 00:15:21.076 Size (in LBAs): 131072 (0GiB) 00:15:21.076 Capacity (in LBAs): 131072 (0GiB) 00:15:21.076 Utilization (in LBAs): 131072 (0GiB) 00:15:21.076 NGUID: 78BDAADFE3B344F0B23DF48428C5B6A8 00:15:21.076 UUID: 78bdaadf-e3b3-44f0-b23d-f48428c5b6a8 00:15:21.076 Thin Provisioning: Not Supported 00:15:21.076 Per-NS Atomic Units: Yes 00:15:21.076 Atomic Boundary Size (Normal): 0 00:15:21.076 Atomic Boundary Size (PFail): 0 00:15:21.076 Atomic Boundary Offset: 0 00:15:21.076 Maximum Single Source Range Length: 65535 00:15:21.076 Maximum Copy Length: 65535 00:15:21.076 Maximum Source Range Count: 1 00:15:21.076 NGUID/EUI64 Never Reused: No 00:15:21.076 Namespace Write Protected: No 00:15:21.076 Number of LBA Formats: 1 00:15:21.076 Current LBA Format: LBA Format #00 00:15:21.076 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:21.076 00:15:21.076 01:50:06 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:21.076 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.365 Initializing NVMe Controllers 00:15:26.365 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:26.365 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:26.365 Initialization complete. Launching workers. 00:15:26.365 ======================================================== 00:15:26.365 Latency(us) 00:15:26.365 Device Information : IOPS MiB/s Average min max 00:15:26.365 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 35664.34 139.31 3588.02 1160.59 7639.36 00:15:26.365 ======================================================== 00:15:26.365 Total : 35664.34 139.31 3588.02 1160.59 7639.36 00:15:26.365 00:15:26.365 01:50:11 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:26.365 EAL: No free 2048 kB hugepages reported on node 1 00:15:31.627 Initializing NVMe Controllers 00:15:31.627 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:31.627 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:31.627 Initialization complete. Launching workers. 
00:15:31.628 ======================================================== 00:15:31.628 Latency(us) 00:15:31.628 Device Information : IOPS MiB/s Average min max 00:15:31.628 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16025.60 62.60 7997.01 6000.01 15983.58 00:15:31.628 ======================================================== 00:15:31.628 Total : 16025.60 62.60 7997.01 6000.01 15983.58 00:15:31.628 00:15:31.628 01:50:17 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:31.628 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.891 Initializing NVMe Controllers 00:15:36.891 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:36.891 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:36.891 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:36.891 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:36.891 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:36.891 Initialization complete. Launching workers. 00:15:36.891 Starting thread on core 2 00:15:36.891 Starting thread on core 3 00:15:36.891 Starting thread on core 1 00:15:36.891 01:50:22 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:36.891 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.172 Initializing NVMe Controllers 00:15:40.172 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:40.172 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:40.172 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:40.172 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:40.172 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:40.172 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:40.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:40.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:40.172 Initialization complete. Launching workers. 
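(Note: spdk_nvme_perf, reconnect, and arbitration in the runs above all address the same vfio-user endpoint through one -r transport-ID string; only the workload flags change between invocations. A minimal sketch of the pattern, reusing the build path and flag values exactly as they appear in this log:

    PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
    TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
    # 4096-byte I/O (-o), queue depth 128 (-q), 5-second run (-t), core mask 0x2 (-c)
    "$PERF" -r "$TRID" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2    # sequential-read pass
    "$PERF" -r "$TRID" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2   # sequential-write pass

Swapping -w read for -w write is the only difference between the two perf passes whose latency tables appear above.)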
00:15:40.172 Starting thread on core 1 with urgent priority queue 00:15:40.172 Starting thread on core 2 with urgent priority queue 00:15:40.172 Starting thread on core 3 with urgent priority queue 00:15:40.172 Starting thread on core 0 with urgent priority queue 00:15:40.172 SPDK bdev Controller (SPDK1 ) core 0: 5729.33 IO/s 17.45 secs/100000 ios 00:15:40.172 SPDK bdev Controller (SPDK1 ) core 1: 5743.33 IO/s 17.41 secs/100000 ios 00:15:40.172 SPDK bdev Controller (SPDK1 ) core 2: 5914.33 IO/s 16.91 secs/100000 ios 00:15:40.173 SPDK bdev Controller (SPDK1 ) core 3: 6011.00 IO/s 16.64 secs/100000 ios 00:15:40.173 ======================================================== 00:15:40.173 00:15:40.173 01:50:25 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:40.173 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.738 Initializing NVMe Controllers 00:15:40.738 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:40.738 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:40.738 Namespace ID: 1 size: 0GB 00:15:40.738 Initialization complete. 00:15:40.738 INFO: using host memory buffer for IO 00:15:40.738 Hello world! 00:15:40.738 01:50:26 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:40.738 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.111 Initializing NVMe Controllers 00:15:42.111 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.111 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.111 Initialization complete. Launching workers. 
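(Note: in the overhead report that follows, the avg/min/max summary is in nanoseconds while the histogram buckets are in microseconds. Each histogram row reads "low - high: cumulative% ( count )": the parenthesized value is the number of I/Os that fell into that bucket, and the percentage is the cumulative share of all I/Os up to the bucket's upper bound, which is why the last row of each histogram closes at 100.0000%. As a rough sanity check, the per-bucket counts of one histogram can be summed back to the sample total; a sketch, assuming the run's stdout were captured to a hypothetical file overhead.log:

    # sum the submit-histogram bucket counts; the complete histogram should total the same
    sed -n '/Submit histogram/,/Complete histogram/p' overhead.log \
      | grep -oE '\( *[0-9]+\)' | tr -d '( )' | awk '{s+=$1} END {print s, "samples"}'
)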
00:15:42.111 submit (in ns) avg, min, max = 6991.6, 3454.4, 4017082.2 00:15:42.111 complete (in ns) avg, min, max = 25559.8, 2082.2, 4016823.3 00:15:42.111 00:15:42.111 Submit histogram 00:15:42.111 ================ 00:15:42.112 Range in us Cumulative Count 00:15:42.112 3.437 - 3.461: 0.0213% ( 3) 00:15:42.112 3.461 - 3.484: 0.3549% ( 47) 00:15:42.112 3.484 - 3.508: 1.5049% ( 162) 00:15:42.112 3.508 - 3.532: 4.0037% ( 352) 00:15:42.112 3.532 - 3.556: 8.7315% ( 666) 00:15:42.112 3.556 - 3.579: 17.9172% ( 1294) 00:15:42.112 3.579 - 3.603: 26.4925% ( 1208) 00:15:42.112 3.603 - 3.627: 35.2453% ( 1233) 00:15:42.112 3.627 - 3.650: 42.8764% ( 1075) 00:15:42.112 3.650 - 3.674: 50.9193% ( 1133) 00:15:42.112 3.674 - 3.698: 57.9612% ( 992) 00:15:42.112 3.698 - 3.721: 62.9517% ( 703) 00:15:42.112 3.721 - 3.745: 66.3520% ( 479) 00:15:42.112 3.745 - 3.769: 69.4896% ( 442) 00:15:42.112 3.769 - 3.793: 73.0390% ( 500) 00:15:42.112 3.793 - 3.816: 76.3967% ( 473) 00:15:42.112 3.816 - 3.840: 79.6621% ( 460) 00:15:42.112 3.840 - 3.864: 83.1121% ( 486) 00:15:42.112 3.864 - 3.887: 85.9303% ( 397) 00:15:42.112 3.887 - 3.911: 88.2942% ( 333) 00:15:42.112 3.911 - 3.935: 90.1115% ( 256) 00:15:42.112 3.935 - 3.959: 91.7442% ( 230) 00:15:42.112 3.959 - 3.982: 93.0361% ( 182) 00:15:42.112 3.982 - 4.006: 94.1009% ( 150) 00:15:42.112 4.006 - 4.030: 94.9457% ( 119) 00:15:42.112 4.030 - 4.053: 95.6343% ( 97) 00:15:42.112 4.053 - 4.077: 96.1809% ( 77) 00:15:42.112 4.077 - 4.101: 96.4861% ( 43) 00:15:42.112 4.101 - 4.124: 96.7843% ( 42) 00:15:42.112 4.124 - 4.148: 96.9972% ( 30) 00:15:42.112 4.148 - 4.172: 97.0966% ( 14) 00:15:42.112 4.172 - 4.196: 97.2031% ( 15) 00:15:42.112 4.196 - 4.219: 97.2812% ( 11) 00:15:42.112 4.219 - 4.243: 97.3593% ( 11) 00:15:42.112 4.243 - 4.267: 97.4303% ( 10) 00:15:42.112 4.267 - 4.290: 97.4799% ( 7) 00:15:42.112 4.290 - 4.314: 97.5225% ( 6) 00:15:42.112 4.314 - 4.338: 97.5722% ( 7) 00:15:42.112 4.338 - 4.361: 97.6006% ( 4) 00:15:42.112 4.361 - 4.385: 97.6290% ( 4) 00:15:42.112 4.385 - 4.409: 97.6361% ( 1) 00:15:42.112 4.409 - 4.433: 97.6432% ( 1) 00:15:42.112 4.433 - 4.456: 97.6574% ( 2) 00:15:42.112 4.456 - 4.480: 97.6645% ( 1) 00:15:42.112 4.480 - 4.504: 97.6716% ( 1) 00:15:42.112 4.504 - 4.527: 97.6787% ( 1) 00:15:42.112 4.551 - 4.575: 97.7000% ( 3) 00:15:42.112 4.575 - 4.599: 97.7142% ( 2) 00:15:42.112 4.599 - 4.622: 97.7284% ( 2) 00:15:42.112 4.622 - 4.646: 97.7497% ( 3) 00:15:42.112 4.646 - 4.670: 97.7639% ( 2) 00:15:42.112 4.670 - 4.693: 97.7923% ( 4) 00:15:42.112 4.693 - 4.717: 97.8349% ( 6) 00:15:42.112 4.717 - 4.741: 97.8846% ( 7) 00:15:42.112 4.741 - 4.764: 97.9059% ( 3) 00:15:42.112 4.764 - 4.788: 97.9556% ( 7) 00:15:42.112 4.788 - 4.812: 98.0124% ( 8) 00:15:42.112 4.812 - 4.836: 98.0407% ( 4) 00:15:42.112 4.836 - 4.859: 98.0762% ( 5) 00:15:42.112 4.859 - 4.883: 98.1401% ( 9) 00:15:42.112 4.883 - 4.907: 98.1685% ( 4) 00:15:42.112 4.907 - 4.930: 98.1827% ( 2) 00:15:42.112 4.930 - 4.954: 98.2253% ( 6) 00:15:42.112 4.954 - 4.978: 98.2537% ( 4) 00:15:42.112 4.978 - 5.001: 98.2679% ( 2) 00:15:42.112 5.001 - 5.025: 98.2750% ( 1) 00:15:42.112 5.025 - 5.049: 98.2963% ( 3) 00:15:42.112 5.049 - 5.073: 98.3105% ( 2) 00:15:42.112 5.073 - 5.096: 98.3247% ( 2) 00:15:42.112 5.096 - 5.120: 98.3318% ( 1) 00:15:42.112 5.120 - 5.144: 98.3673% ( 5) 00:15:42.112 5.144 - 5.167: 98.3886% ( 3) 00:15:42.112 5.167 - 5.191: 98.4028% ( 2) 00:15:42.112 5.191 - 5.215: 98.4099% ( 1) 00:15:42.112 5.215 - 5.239: 98.4170% ( 1) 00:15:42.112 5.239 - 5.262: 98.4383% ( 3) 00:15:42.112 5.262 - 5.286: 98.4454% ( 1) 
00:15:42.112 5.310 - 5.333: 98.4525% ( 1) 00:15:42.112 5.333 - 5.357: 98.4596% ( 1) 00:15:42.112 5.357 - 5.381: 98.4667% ( 1) 00:15:42.112 5.594 - 5.618: 98.4738% ( 1) 00:15:42.112 5.689 - 5.713: 98.4809% ( 1) 00:15:42.112 5.713 - 5.736: 98.4880% ( 1) 00:15:42.112 5.807 - 5.831: 98.4951% ( 1) 00:15:42.112 5.831 - 5.855: 98.5022% ( 1) 00:15:42.112 5.855 - 5.879: 98.5164% ( 2) 00:15:42.112 5.902 - 5.926: 98.5235% ( 1) 00:15:42.112 5.926 - 5.950: 98.5306% ( 1) 00:15:42.112 5.997 - 6.021: 98.5377% ( 1) 00:15:42.112 6.044 - 6.068: 98.5448% ( 1) 00:15:42.112 6.447 - 6.495: 98.5519% ( 1) 00:15:42.112 6.495 - 6.542: 98.5590% ( 1) 00:15:42.112 6.590 - 6.637: 98.5661% ( 1) 00:15:42.112 6.684 - 6.732: 98.5732% ( 1) 00:15:42.112 6.732 - 6.779: 98.5803% ( 1) 00:15:42.112 6.827 - 6.874: 98.5874% ( 1) 00:15:42.112 6.874 - 6.921: 98.6086% ( 3) 00:15:42.112 6.921 - 6.969: 98.6228% ( 2) 00:15:42.112 6.969 - 7.016: 98.6370% ( 2) 00:15:42.112 7.253 - 7.301: 98.6512% ( 2) 00:15:42.112 7.490 - 7.538: 98.6583% ( 1) 00:15:42.112 7.585 - 7.633: 98.6654% ( 1) 00:15:42.112 7.680 - 7.727: 98.6938% ( 4) 00:15:42.112 7.727 - 7.775: 98.7222% ( 4) 00:15:42.112 7.775 - 7.822: 98.7293% ( 1) 00:15:42.112 7.822 - 7.870: 98.7577% ( 4) 00:15:42.112 7.870 - 7.917: 98.7719% ( 2) 00:15:42.112 8.059 - 8.107: 98.7790% ( 1) 00:15:42.112 8.107 - 8.154: 98.7861% ( 1) 00:15:42.112 8.249 - 8.296: 98.8003% ( 2) 00:15:42.112 8.296 - 8.344: 98.8216% ( 3) 00:15:42.112 8.533 - 8.581: 98.8287% ( 1) 00:15:42.112 8.581 - 8.628: 98.8358% ( 1) 00:15:42.112 8.676 - 8.723: 98.8429% ( 1) 00:15:42.112 8.865 - 8.913: 98.8500% ( 1) 00:15:42.112 9.007 - 9.055: 98.8642% ( 2) 00:15:42.112 9.055 - 9.102: 98.8713% ( 1) 00:15:42.112 9.102 - 9.150: 98.8784% ( 1) 00:15:42.112 9.339 - 9.387: 98.8855% ( 1) 00:15:42.112 9.434 - 9.481: 98.8926% ( 1) 00:15:42.112 9.576 - 9.624: 98.8997% ( 1) 00:15:42.112 9.719 - 9.766: 98.9139% ( 2) 00:15:42.112 9.956 - 10.003: 98.9210% ( 1) 00:15:42.112 10.098 - 10.145: 98.9423% ( 3) 00:15:42.112 10.145 - 10.193: 98.9494% ( 1) 00:15:42.112 10.193 - 10.240: 98.9636% ( 2) 00:15:42.112 10.477 - 10.524: 98.9707% ( 1) 00:15:42.112 10.619 - 10.667: 98.9778% ( 1) 00:15:42.112 10.714 - 10.761: 98.9849% ( 1) 00:15:42.112 11.567 - 11.615: 98.9920% ( 1) 00:15:42.112 11.757 - 11.804: 99.0062% ( 2) 00:15:42.112 11.899 - 11.947: 99.0133% ( 1) 00:15:42.112 12.326 - 12.421: 99.0204% ( 1) 00:15:42.112 12.516 - 12.610: 99.0275% ( 1) 00:15:42.112 12.800 - 12.895: 99.0346% ( 1) 00:15:42.112 12.895 - 12.990: 99.0417% ( 1) 00:15:42.112 13.274 - 13.369: 99.0559% ( 2) 00:15:42.112 13.369 - 13.464: 99.0630% ( 1) 00:15:42.112 13.464 - 13.559: 99.0701% ( 1) 00:15:42.112 13.653 - 13.748: 99.0772% ( 1) 00:15:42.112 13.938 - 14.033: 99.0843% ( 1) 00:15:42.112 14.033 - 14.127: 99.0914% ( 1) 00:15:42.112 14.127 - 14.222: 99.0985% ( 1) 00:15:42.112 14.222 - 14.317: 99.1056% ( 1) 00:15:42.112 14.317 - 14.412: 99.1198% ( 2) 00:15:42.112 14.412 - 14.507: 99.1269% ( 1) 00:15:42.112 14.507 - 14.601: 99.1340% ( 1) 00:15:42.112 14.601 - 14.696: 99.1411% ( 1) 00:15:42.112 14.696 - 14.791: 99.1482% ( 1) 00:15:42.112 14.981 - 15.076: 99.1552% ( 1) 00:15:42.112 15.360 - 15.455: 99.1623% ( 1) 00:15:42.112 16.877 - 16.972: 99.1694% ( 1) 00:15:42.112 17.161 - 17.256: 99.1907% ( 3) 00:15:42.112 17.256 - 17.351: 99.2191% ( 4) 00:15:42.112 17.351 - 17.446: 99.2546% ( 5) 00:15:42.112 17.446 - 17.541: 99.2688% ( 2) 00:15:42.112 17.541 - 17.636: 99.3398% ( 10) 00:15:42.112 17.636 - 17.730: 99.3682% ( 4) 00:15:42.112 17.730 - 17.825: 99.4037% ( 5) 00:15:42.112 17.825 - 17.920: 
99.4108% ( 1) 00:15:42.112 17.920 - 18.015: 99.4463% ( 5) 00:15:42.112 18.015 - 18.110: 99.5102% ( 9) 00:15:42.112 18.110 - 18.204: 99.5457% ( 5) 00:15:42.112 18.204 - 18.299: 99.5670% ( 3) 00:15:42.112 18.299 - 18.394: 99.6380% ( 10) 00:15:42.112 18.394 - 18.489: 99.6735% ( 5) 00:15:42.112 18.489 - 18.584: 99.6948% ( 3) 00:15:42.112 18.584 - 18.679: 99.7586% ( 9) 00:15:42.112 18.679 - 18.773: 99.8012% ( 6) 00:15:42.112 18.773 - 18.868: 99.8083% ( 1) 00:15:42.112 18.868 - 18.963: 99.8225% ( 2) 00:15:42.112 19.342 - 19.437: 99.8367% ( 2) 00:15:42.112 19.437 - 19.532: 99.8509% ( 2) 00:15:42.112 19.627 - 19.721: 99.8580% ( 1) 00:15:42.112 19.721 - 19.816: 99.8722% ( 2) 00:15:42.112 20.006 - 20.101: 99.8793% ( 1) 00:15:42.112 23.230 - 23.324: 99.8864% ( 1) 00:15:42.112 23.419 - 23.514: 99.8935% ( 1) 00:15:42.112 23.514 - 23.609: 99.9006% ( 1) 00:15:42.112 24.841 - 25.031: 99.9077% ( 1) 00:15:42.112 25.979 - 26.169: 99.9148% ( 1) 00:15:42.112 34.323 - 34.513: 99.9219% ( 1) 00:15:42.112 3980.705 - 4004.978: 99.9716% ( 7) 00:15:42.112 4004.978 - 4029.250: 100.0000% ( 4) 00:15:42.112 00:15:42.112 Complete histogram 00:15:42.112 ================== 00:15:42.112 Range in us Cumulative Count 00:15:42.112 2.074 - 2.086: 0.0923% ( 13) 00:15:42.112 2.086 - 2.098: 9.8531% ( 1375) 00:15:42.112 2.098 - 2.110: 32.1289% ( 3138) 00:15:42.113 2.110 - 2.121: 36.1255% ( 563) 00:15:42.113 2.121 - 2.133: 48.4134% ( 1731) 00:15:42.113 2.133 - 2.145: 61.1557% ( 1795) 00:15:42.113 2.145 - 2.157: 64.1442% ( 421) 00:15:42.113 2.157 - 2.169: 71.6618% ( 1059) 00:15:42.113 2.169 - 2.181: 78.6044% ( 978) 00:15:42.113 2.181 - 2.193: 80.2016% ( 225) 00:15:42.113 2.193 - 2.204: 85.0004% ( 676) 00:15:42.113 2.204 - 2.216: 88.5426% ( 499) 00:15:42.113 2.216 - 2.228: 89.6216% ( 152) 00:15:42.113 2.228 - 2.240: 91.7158% ( 295) 00:15:42.113 2.240 - 2.252: 93.9235% ( 311) 00:15:42.113 2.252 - 2.264: 94.4417% ( 73) 00:15:42.113 2.264 - 2.276: 94.9954% ( 78) 00:15:42.113 2.276 - 2.287: 95.4994% ( 71) 00:15:42.113 2.287 - 2.299: 95.7053% ( 29) 00:15:42.113 2.299 - 2.311: 95.9679% ( 37) 00:15:42.113 2.311 - 2.323: 96.2377% ( 38) 00:15:42.113 2.323 - 2.335: 96.3299% ( 13) 00:15:42.113 2.335 - 2.347: 96.3867% ( 8) 00:15:42.113 2.347 - 2.359: 96.4151% ( 4) 00:15:42.113 2.359 - 2.370: 96.5571% ( 20) 00:15:42.113 2.370 - 2.382: 96.6210% ( 9) 00:15:42.113 2.382 - 2.394: 96.8269% ( 29) 00:15:42.113 2.394 - 2.406: 97.1605% ( 47) 00:15:42.113 2.406 - 2.418: 97.3593% ( 28) 00:15:42.113 2.418 - 2.430: 97.5509% ( 27) 00:15:42.113 2.430 - 2.441: 97.6787% ( 18) 00:15:42.113 2.441 - 2.453: 97.8633% ( 26) 00:15:42.113 2.453 - 2.465: 98.0265% ( 23) 00:15:42.113 2.465 - 2.477: 98.1046% ( 11) 00:15:42.113 2.477 - 2.489: 98.1756% ( 10) 00:15:42.113 2.489 - 2.501: 98.2466% ( 10) 00:15:42.113 2.501 - 2.513: 98.2821% ( 5) 00:15:42.113 2.513 - 2.524: 98.3105% ( 4) 00:15:42.113 2.524 - 2.536: 98.3247% ( 2) 00:15:42.113 2.536 - 2.548: 98.3389% ( 2) 00:15:42.113 2.548 - 2.560: 98.3460% ( 1) 00:15:42.113 2.560 - 2.572: 98.3531% ( 1) 00:15:42.113 2.572 - 2.584: 98.3602% ( 1) 00:15:42.113 2.584 - 2.596: 98.3673% ( 1) 00:15:42.113 2.596 - 2.607: 98.3744% ( 1) 00:15:42.113 2.607 - 2.619: 98.4028% ( 4) 00:15:42.113 2.619 - 2.631: 98.4099% ( 1) 00:15:42.113 2.655 - 2.667: 98.4170% ( 1) 00:15:42.113 2.667 - 2.679: 98.4241% ( 1) 00:15:42.113 2.690 - 2.702: 98.4312% ( 1) 00:15:42.113 2.726 - 2.738: 98.4383% ( 1) 00:15:42.113 2.738 - 2.750: 98.4525% ( 2) 00:15:42.113 2.761 - 2.773: 98.4596% ( 1) 00:15:42.113 2.809 - 2.821: 98.4667% ( 1) 00:15:42.113 2.821 - 2.833: 98.4738% 
( 1) 00:15:42.113 3.319 - 3.342: 98.4809% ( 1) 00:15:42.113 3.366 - 3.390: 98.4951% ( 2) 00:15:42.113 3.390 - 3.413: 98.5093% ( 2) 00:15:42.113 3.413 - 3.437: 98.5164% ( 1) 00:15:42.113 3.437 - 3.461: 98.5235% ( 1) 00:15:42.113 3.484 - 3.508: 98.5519% ( 4) 00:15:42.113 3.508 - 3.532: 98.5590% ( 1) 00:15:42.113 3.579 - 3.603: 98.5661% ( 1) 00:15:42.113 3.603 - 3.627: 98.5732% ( 1) 00:15:42.113 3.627 - 3.650: 98.5803% ( 1) 00:15:42.113 3.698 - 3.721: 98.5874% ( 1) 00:15:42.113 3.769 - 3.793: 98.5944% ( 1) 00:15:42.113 3.793 - 3.816: 98.6015% ( 1) 00:15:42.113 3.840 - 3.864: 98.6086% ( 1) 00:15:42.113 3.959 - 3.982: 98.6157% ( 1) 00:15:42.113 4.006 - 4.030: 98.6228% ( 1) 00:15:42.113 5.428 - 5.452: 98.6299% ( 1) 00:15:42.113 5.523 - 5.547: 98.6370% ( 1) 00:15:42.113 5.997 - 6.021: 98.6441% ( 1) 00:15:42.113 6.116 - 6.163: 98.6725% ( 4) 00:15:42.113 6.210 - 6.258: 98.6796% ( 1) 00:15:42.113 6.258 - 6.305: 98.7009% ( 3) 00:15:42.113 6.353 - 6.400: 98.7151% ( 2) 00:15:42.113 6.400 - 6.447: 98.7222% ( 1) 00:15:42.113 6.495 - 6.542: 98.7293% ( 1) 00:15:42.113 6.542 - 6.590: 98.7364% ( 1) 00:15:42.113 6.684 - 6.732: 98.7435% ( 1) 00:15:42.113 6.921 - 6.969: 98.7506% ( 1) 00:15:42.113 7.016 - 7.064: 98.7577% ( 1) 00:15:42.113 7.206 - 7.253: 98.7648% ( 1) 00:15:42.113 7.301 - 7.348: 98.7719% ( 1) 00:15:42.113 7.538 - 7.585: 98.7790% ( 1) 00:15:42.113 7.870 - 7.917: 98.7861% ( 1) 00:15:42.113 8.107 - 8.154: 98.7932% ( 1) 00:15:42.113 8.960 - 9.007: 98.8003% ( 1) 00:15:42.113 9.671 - 9.719: 98.8074% ( 1) 00:15:42.113 12.895 - 12.990: 98.8145% ( 1) 00:15:42.113 15.455 - 15.550: 98.8287% ( 2) 00:15:42.113 15.550 - 15.644: 98.8358% ( 1) 00:15:42.113 15.644 - 15.739: 98.8429% ( 1) 00:15:42.113 15.739 - 15.834: 98.8855% ( 6) 00:15:42.113 15.834 - 15.929: 98.9068% ( 3) 00:15:42.113 15.929 - 16.024: 98.9210% ( 2) 00:15:42.113 16.024 - 16.119: 98.9636% ( 6) 00:15:42.113 16.119 - 16.213: 98.9991% ( 5) 00:15:42.113 16.213 - 16.308: 99.0204% ( 3) 00:15:42.113 16.308 - 16.403: 99.0559% ( 5) 00:15:42.113 16.403 - 16.498: 99.0914% ( 5) 00:15:42.113 16.498 - 16.593: 99.1127% ( 3) 00:15:42.113 16.593 - 16.687: 99.1482% ( 5) 00:15:42.113 16.687 - 16.782: 99.1978% ( 7) 00:15:42.113 16.782 - 16.877: 99.2120% ( 2) 00:15:42.113 16.877 - 16.972: 99.2546% ( 6) 00:15:42.113 16.972 - 17.067: 99.2688% ( 2) 00:15:42.113 17.067 - 17.161: 99.2972% ( 4) 00:15:42.113 17.161 - 17.256: 99.3043% ( 1) 00:15:42.113 17.256 - 17.351: 99.3185% ( 2) 00:15:42.113 17.446 - 17.541: 99.3398% ( 3) 00:15:42.113 17.541 - 17.636: 99.3469% ( 1) 00:15:42.113 17.730 - 17.825: 99.3540% ( 1) 00:15:42.113 17.825 - 17.920: 99.3611% ( 1) 00:15:42.113 18.204 - 18.299: 99.3682% ( 1) 00:15:42.113 18.299 - 18.394: 99.3753% ( 1) 00:15:42.113 18.394 - 18.489: 99.3824% ( 1) 00:15:42.113 18.679 - 18.773: 99.3895% ( 1) 00:15:42.113 20.480 - 20.575: 99.3966% ( 1) 00:15:42.113 25.221 - 25.410: 99.4037% ( 1) 00:15:42.113 35.461 - 35.650: 99.4108% ( 1) 00:15:42.113 153.221 - 153.979: 99.4179% ( 1) 00:15:42.113 3980.705 - 4004.978: 99.7870% ( 52) 00:15:42.113 4004.978 - 4029.250: 100.0000% ( 30) 00:15:42.113 00:15:42.113 01:50:27 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:42.113 01:50:27 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:42.113 01:50:27 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:42.113 01:50:27 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:42.113 01:50:27 -- 
target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:42.113 [2024-04-15 01:50:27.625195] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:15:42.113 [ 00:15:42.113 { 00:15:42.113 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:42.113 "subtype": "Discovery", 00:15:42.113 "listen_addresses": [], 00:15:42.113 "allow_any_host": true, 00:15:42.113 "hosts": [] 00:15:42.113 }, 00:15:42.113 { 00:15:42.113 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:42.113 "subtype": "NVMe", 00:15:42.113 "listen_addresses": [ 00:15:42.113 { 00:15:42.113 "transport": "VFIOUSER", 00:15:42.113 "trtype": "VFIOUSER", 00:15:42.113 "adrfam": "IPv4", 00:15:42.113 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:42.113 "trsvcid": "0" 00:15:42.113 } 00:15:42.113 ], 00:15:42.113 "allow_any_host": true, 00:15:42.113 "hosts": [], 00:15:42.113 "serial_number": "SPDK1", 00:15:42.113 "model_number": "SPDK bdev Controller", 00:15:42.113 "max_namespaces": 32, 00:15:42.113 "min_cntlid": 1, 00:15:42.113 "max_cntlid": 65519, 00:15:42.113 "namespaces": [ 00:15:42.113 { 00:15:42.113 "nsid": 1, 00:15:42.113 "bdev_name": "Malloc1", 00:15:42.113 "name": "Malloc1", 00:15:42.113 "nguid": "78BDAADFE3B344F0B23DF48428C5B6A8", 00:15:42.113 "uuid": "78bdaadf-e3b3-44f0-b23d-f48428c5b6a8" 00:15:42.113 } 00:15:42.113 ] 00:15:42.113 }, 00:15:42.113 { 00:15:42.113 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:42.113 "subtype": "NVMe", 00:15:42.113 "listen_addresses": [ 00:15:42.113 { 00:15:42.113 "transport": "VFIOUSER", 00:15:42.113 "trtype": "VFIOUSER", 00:15:42.113 "adrfam": "IPv4", 00:15:42.113 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:42.113 "trsvcid": "0" 00:15:42.113 } 00:15:42.113 ], 00:15:42.113 "allow_any_host": true, 00:15:42.113 "hosts": [], 00:15:42.113 "serial_number": "SPDK2", 00:15:42.113 "model_number": "SPDK bdev Controller", 00:15:42.113 "max_namespaces": 32, 00:15:42.113 "min_cntlid": 1, 00:15:42.113 "max_cntlid": 65519, 00:15:42.113 "namespaces": [ 00:15:42.113 { 00:15:42.113 "nsid": 1, 00:15:42.113 "bdev_name": "Malloc2", 00:15:42.113 "name": "Malloc2", 00:15:42.113 "nguid": "3DF26DAF317D4C54AD3FF055E79669E2", 00:15:42.113 "uuid": "3df26daf-317d-4c54-ad3f-f055e79669e2" 00:15:42.113 } 00:15:42.113 ] 00:15:42.113 } 00:15:42.113 ] 00:15:42.113 01:50:27 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:42.113 01:50:27 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2134270 00:15:42.114 01:50:27 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:42.114 01:50:27 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:42.114 01:50:27 -- common/autotest_common.sh@1244 -- # local i=0 00:15:42.114 01:50:27 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:42.114 01:50:27 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:42.114 01:50:27 -- common/autotest_common.sh@1255 -- # return 0 00:15:42.114 01:50:27 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:42.114 01:50:27 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:42.114 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.372 Malloc3 00:15:42.372 01:50:27 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:42.630 01:50:28 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:42.630 Asynchronous Event Request test 00:15:42.630 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.630 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:42.630 Registering asynchronous event callbacks... 00:15:42.630 Starting namespace attribute notice tests for all controllers... 00:15:42.630 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:42.630 aer_cb - Changed Namespace 00:15:42.630 Cleaning up... 00:15:42.889 [ 00:15:42.889 { 00:15:42.889 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:42.889 "subtype": "Discovery", 00:15:42.889 "listen_addresses": [], 00:15:42.889 "allow_any_host": true, 00:15:42.889 "hosts": [] 00:15:42.889 }, 00:15:42.889 { 00:15:42.889 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:42.889 "subtype": "NVMe", 00:15:42.889 "listen_addresses": [ 00:15:42.889 { 00:15:42.889 "transport": "VFIOUSER", 00:15:42.889 "trtype": "VFIOUSER", 00:15:42.889 "adrfam": "IPv4", 00:15:42.889 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:42.889 "trsvcid": "0" 00:15:42.889 } 00:15:42.889 ], 00:15:42.889 "allow_any_host": true, 00:15:42.889 "hosts": [], 00:15:42.889 "serial_number": "SPDK1", 00:15:42.889 "model_number": "SPDK bdev Controller", 00:15:42.889 "max_namespaces": 32, 00:15:42.889 "min_cntlid": 1, 00:15:42.889 "max_cntlid": 65519, 00:15:42.889 "namespaces": [ 00:15:42.889 { 00:15:42.889 "nsid": 1, 00:15:42.889 "bdev_name": "Malloc1", 00:15:42.889 "name": "Malloc1", 00:15:42.889 "nguid": "78BDAADFE3B344F0B23DF48428C5B6A8", 00:15:42.889 "uuid": "78bdaadf-e3b3-44f0-b23d-f48428c5b6a8" 00:15:42.889 }, 00:15:42.889 { 00:15:42.889 "nsid": 2, 00:15:42.889 "bdev_name": "Malloc3", 00:15:42.889 "name": "Malloc3", 00:15:42.889 "nguid": "775C886E476E4B458DBEEF581B3E3D82", 00:15:42.889 "uuid": "775c886e-476e-4b45-8dbe-ef581b3e3d82" 00:15:42.889 } 00:15:42.889 ] 00:15:42.889 }, 00:15:42.889 { 00:15:42.889 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:42.889 "subtype": "NVMe", 00:15:42.889 "listen_addresses": [ 00:15:42.889 { 00:15:42.889 "transport": "VFIOUSER", 00:15:42.889 "trtype": "VFIOUSER", 00:15:42.889 "adrfam": "IPv4", 00:15:42.889 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:42.889 "trsvcid": "0" 00:15:42.889 } 00:15:42.889 ], 00:15:42.889 "allow_any_host": true, 00:15:42.889 "hosts": [], 00:15:42.889 "serial_number": "SPDK2", 00:15:42.889 "model_number": "SPDK bdev Controller", 00:15:42.889 "max_namespaces": 32, 00:15:42.889 "min_cntlid": 1, 00:15:42.889 "max_cntlid": 65519, 00:15:42.889 "namespaces": [ 00:15:42.889 { 00:15:42.889 "nsid": 1, 00:15:42.889 "bdev_name": "Malloc2", 00:15:42.889 "name": "Malloc2", 00:15:42.889 "nguid": "3DF26DAF317D4C54AD3FF055E79669E2", 00:15:42.889 "uuid": "3df26daf-317d-4c54-ad3f-f055e79669e2" 
00:15:42.889 } 00:15:42.889 ] 00:15:42.889 } 00:15:42.889 ] 00:15:42.889 01:50:28 -- target/nvmf_vfio_user.sh@44 -- # wait 2134270 00:15:42.889 01:50:28 -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:42.889 01:50:28 -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:42.889 01:50:28 -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:42.889 01:50:28 -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:42.889 [2024-04-15 01:50:28.423947] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:15:42.889 [2024-04-15 01:50:28.423991] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2134408 ] 00:15:42.889 EAL: No free 2048 kB hugepages reported on node 1 00:15:42.889 [2024-04-15 01:50:28.456039] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:42.889 [2024-04-15 01:50:28.461339] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:42.889 [2024-04-15 01:50:28.461375] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7841895000 00:15:42.889 [2024-04-15 01:50:28.462364] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.889 [2024-04-15 01:50:28.463362] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.889 [2024-04-15 01:50:28.464371] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.889 [2024-04-15 01:50:28.465367] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:42.889 [2024-04-15 01:50:28.466388] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:42.889 [2024-04-15 01:50:28.467382] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.889 [2024-04-15 01:50:28.468404] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:42.889 [2024-04-15 01:50:28.469420] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:42.889 [2024-04-15 01:50:28.470420] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:42.889 [2024-04-15 01:50:28.470442] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7840649000 00:15:42.889 [2024-04-15 01:50:28.471594] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:42.889 [2024-04-15 
01:50:28.486365] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:42.889 [2024-04-15 01:50:28.486397] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:42.889 [2024-04-15 01:50:28.488487] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:42.889 [2024-04-15 01:50:28.488536] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:42.889 [2024-04-15 01:50:28.488617] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:42.889 [2024-04-15 01:50:28.488639] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:42.889 [2024-04-15 01:50:28.488650] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:42.889 [2024-04-15 01:50:28.489494] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:42.889 [2024-04-15 01:50:28.489518] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:42.889 [2024-04-15 01:50:28.489532] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:42.889 [2024-04-15 01:50:28.490500] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:42.889 [2024-04-15 01:50:28.490526] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:42.889 [2024-04-15 01:50:28.490541] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:42.889 [2024-04-15 01:50:28.491504] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:42.889 [2024-04-15 01:50:28.491524] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:42.889 [2024-04-15 01:50:28.492513] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:42.889 [2024-04-15 01:50:28.492532] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:42.889 [2024-04-15 01:50:28.492542] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:42.889 [2024-04-15 01:50:28.492553] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:42.889 [2024-04-15 01:50:28.492663] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:42.890 
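(Note: the register offsets in this vfio-user trace follow the standard NVMe controller register map: 0x0 CAP, 0x8 VS (0x10300 = NVMe 1.3), 0x14 CC, 0x1c CSTS, 0x24 AQA, 0x28 ASQ, 0x30 ACQ. The values written in the enable sequence below decode as:

    AQA  = 0xff00ff -> ASQS(11:0) = 0xff, ACQS(27:16) = 0xff: 256-entry admin queues (zero-based)
    CC   = 0x460001 -> EN(bit 0) = 1, IOSQES(19:16) = 6, IOCQES(23:20) = 4: 2^6 = 64-byte SQ entries, 2^4 = 16-byte CQ entries
    CSTS = 0x1      -> RDY = 1, controller ready

These match the "Maximum Queue Entries: 256" and the 64/16-byte queue-entry sizes reported by identify further down. At shutdown the driver instead writes CC = 0x464001, which keeps the same fields but adds SHN(15:14) = 01b (normal shutdown notification), then polls until CSTS reads 0x9 = RDY with SHST(3:2) = 10b, shutdown complete.)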
[2024-04-15 01:50:28.492671] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:42.890 [2024-04-15 01:50:28.492679] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:42.890 [2024-04-15 01:50:28.497071] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:42.890 [2024-04-15 01:50:28.497540] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:42.890 [2024-04-15 01:50:28.498547] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:42.890 [2024-04-15 01:50:28.499577] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:42.890 [2024-04-15 01:50:28.500554] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:42.890 [2024-04-15 01:50:28.500573] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:42.890 [2024-04-15 01:50:28.500582] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:42.890 [2024-04-15 01:50:28.500605] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:42.890 [2024-04-15 01:50:28.500618] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:42.890 [2024-04-15 01:50:28.500637] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:42.890 [2024-04-15 01:50:28.500646] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:42.890 [2024-04-15 01:50:28.500664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:42.890 [2024-04-15 01:50:28.507062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:42.890 [2024-04-15 01:50:28.507085] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:42.890 [2024-04-15 01:50:28.507099] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:42.890 [2024-04-15 01:50:28.507108] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:42.890 [2024-04-15 01:50:28.507120] nvme_ctrlr.c:1989:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:42.890 [2024-04-15 01:50:28.507128] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:42.890 [2024-04-15 01:50:28.507136] 
nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:42.890 [2024-04-15 01:50:28.507145] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:42.890 [2024-04-15 01:50:28.507160] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:42.890 [2024-04-15 01:50:28.507177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:42.890 [2024-04-15 01:50:28.515055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:42.890 [2024-04-15 01:50:28.515079] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.890 [2024-04-15 01:50:28.515106] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.890 [2024-04-15 01:50:28.515119] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.890 [2024-04-15 01:50:28.515132] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:42.890 [2024-04-15 01:50:28.515141] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:42.890 [2024-04-15 01:50:28.515158] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:42.890 [2024-04-15 01:50:28.515173] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:42.890 [2024-04-15 01:50:28.523059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:42.890 [2024-04-15 01:50:28.523076] nvme_ctrlr.c:2877:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:42.890 [2024-04-15 01:50:28.523085] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:42.890 [2024-04-15 01:50:28.523096] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:42.890 [2024-04-15 01:50:28.523111] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:42.890 [2024-04-15 01:50:28.523125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:42.890 [2024-04-15 01:50:28.531055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:42.890 [2024-04-15 01:50:28.531115] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
identify active ns (timeout 30000 ms) 00:15:42.890 [2024-04-15 01:50:28.531131] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:42.890 [2024-04-15 01:50:28.531144] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:42.890 [2024-04-15 01:50:28.531152] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:42.890 [2024-04-15 01:50:28.531166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:43.149 [2024-04-15 01:50:28.539059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:43.149 [2024-04-15 01:50:28.539083] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:43.149 [2024-04-15 01:50:28.539104] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:43.149 [2024-04-15 01:50:28.539119] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:43.149 [2024-04-15 01:50:28.539132] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:43.149 [2024-04-15 01:50:28.539140] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.149 [2024-04-15 01:50:28.539150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.149 [2024-04-15 01:50:28.547074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:43.149 [2024-04-15 01:50:28.547102] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:43.149 [2024-04-15 01:50:28.547118] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:43.150 [2024-04-15 01:50:28.547131] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:43.150 [2024-04-15 01:50:28.547139] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.150 [2024-04-15 01:50:28.547149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.150 [2024-04-15 01:50:28.555072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:43.150 [2024-04-15 01:50:28.555093] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:43.150 [2024-04-15 01:50:28.555107] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:43.150 [2024-04-15 01:50:28.555122] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:43.150 [2024-04-15 01:50:28.555132] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:43.150 [2024-04-15 01:50:28.555141] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:43.150 [2024-04-15 01:50:28.555150] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:43.150 [2024-04-15 01:50:28.555158] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:43.150 [2024-04-15 01:50:28.555166] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:43.150 [2024-04-15 01:50:28.555191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:43.150 [2024-04-15 01:50:28.563059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:43.150 [2024-04-15 01:50:28.563090] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:43.150 [2024-04-15 01:50:28.571055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:43.150 [2024-04-15 01:50:28.571081] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:43.150 [2024-04-15 01:50:28.579057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:43.150 [2024-04-15 01:50:28.579083] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:43.150 [2024-04-15 01:50:28.587059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:43.150 [2024-04-15 01:50:28.587085] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:43.150 [2024-04-15 01:50:28.587096] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:43.150 [2024-04-15 01:50:28.587102] nvme_pcie_common.c:1235:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:43.150 [2024-04-15 01:50:28.587109] nvme_pcie_common.c:1251:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:43.150 [2024-04-15 01:50:28.587118] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:43.150 [2024-04-15 01:50:28.587131] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:43.150 [2024-04-15 01:50:28.587139] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:43.150 [2024-04-15 01:50:28.587149] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 
0x2000002fc000 PRP2 0x0 00:15:43.150 [2024-04-15 01:50:28.587160] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:43.150 [2024-04-15 01:50:28.587169] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:43.150 [2024-04-15 01:50:28.587178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:43.150 [2024-04-15 01:50:28.587190] nvme_pcie_common.c:1198:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:43.150 [2024-04-15 01:50:28.587198] nvme_pcie_common.c:1226:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:43.150 [2024-04-15 01:50:28.587207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:43.150 [2024-04-15 01:50:28.595056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:43.150 [2024-04-15 01:50:28.595087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:43.150 [2024-04-15 01:50:28.595103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:43.150 [2024-04-15 01:50:28.595115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:43.150 ===================================================== 00:15:43.150 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:43.150 ===================================================== 00:15:43.150 Controller Capabilities/Features 00:15:43.150 ================================ 00:15:43.150 Vendor ID: 4e58 00:15:43.150 Subsystem Vendor ID: 4e58 00:15:43.150 Serial Number: SPDK2 00:15:43.150 Model Number: SPDK bdev Controller 00:15:43.150 Firmware Version: 24.01.1 00:15:43.150 Recommended Arb Burst: 6 00:15:43.150 IEEE OUI Identifier: 8d 6b 50 00:15:43.150 Multi-path I/O 00:15:43.150 May have multiple subsystem ports: Yes 00:15:43.150 May have multiple controllers: Yes 00:15:43.150 Associated with SR-IOV VF: No 00:15:43.150 Max Data Transfer Size: 131072 00:15:43.150 Max Number of Namespaces: 32 00:15:43.150 Max Number of I/O Queues: 127 00:15:43.150 NVMe Specification Version (VS): 1.3 00:15:43.150 NVMe Specification Version (Identify): 1.3 00:15:43.150 Maximum Queue Entries: 256 00:15:43.150 Contiguous Queues Required: Yes 00:15:43.150 Arbitration Mechanisms Supported 00:15:43.150 Weighted Round Robin: Not Supported 00:15:43.150 Vendor Specific: Not Supported 00:15:43.150 Reset Timeout: 15000 ms 00:15:43.150 Doorbell Stride: 4 bytes 00:15:43.150 NVM Subsystem Reset: Not Supported 00:15:43.150 Command Sets Supported 00:15:43.150 NVM Command Set: Supported 00:15:43.150 Boot Partition: Not Supported 00:15:43.150 Memory Page Size Minimum: 4096 bytes 00:15:43.150 Memory Page Size Maximum: 4096 bytes 00:15:43.150 Persistent Memory Region: Not Supported 00:15:43.150 Optional Asynchronous Events Supported 00:15:43.150 Namespace Attribute Notices: Supported 00:15:43.150 Firmware Activation Notices: Not Supported 00:15:43.150 ANA Change Notices: Not Supported 00:15:43.150 PLE Aggregate Log Change Notices: Not Supported 00:15:43.150 LBA Status Info Alert 
Notices: Not Supported 00:15:43.150 EGE Aggregate Log Change Notices: Not Supported 00:15:43.150 Normal NVM Subsystem Shutdown event: Not Supported 00:15:43.150 Zone Descriptor Change Notices: Not Supported 00:15:43.150 Discovery Log Change Notices: Not Supported 00:15:43.150 Controller Attributes 00:15:43.150 128-bit Host Identifier: Supported 00:15:43.150 Non-Operational Permissive Mode: Not Supported 00:15:43.150 NVM Sets: Not Supported 00:15:43.150 Read Recovery Levels: Not Supported 00:15:43.150 Endurance Groups: Not Supported 00:15:43.150 Predictable Latency Mode: Not Supported 00:15:43.150 Traffic Based Keep ALive: Not Supported 00:15:43.150 Namespace Granularity: Not Supported 00:15:43.150 SQ Associations: Not Supported 00:15:43.150 UUID List: Not Supported 00:15:43.150 Multi-Domain Subsystem: Not Supported 00:15:43.150 Fixed Capacity Management: Not Supported 00:15:43.150 Variable Capacity Management: Not Supported 00:15:43.150 Delete Endurance Group: Not Supported 00:15:43.150 Delete NVM Set: Not Supported 00:15:43.150 Extended LBA Formats Supported: Not Supported 00:15:43.150 Flexible Data Placement Supported: Not Supported 00:15:43.150 00:15:43.150 Controller Memory Buffer Support 00:15:43.150 ================================ 00:15:43.150 Supported: No 00:15:43.150 00:15:43.150 Persistent Memory Region Support 00:15:43.150 ================================ 00:15:43.150 Supported: No 00:15:43.150 00:15:43.150 Admin Command Set Attributes 00:15:43.150 ============================ 00:15:43.150 Security Send/Receive: Not Supported 00:15:43.150 Format NVM: Not Supported 00:15:43.150 Firmware Activate/Download: Not Supported 00:15:43.150 Namespace Management: Not Supported 00:15:43.150 Device Self-Test: Not Supported 00:15:43.150 Directives: Not Supported 00:15:43.150 NVMe-MI: Not Supported 00:15:43.150 Virtualization Management: Not Supported 00:15:43.150 Doorbell Buffer Config: Not Supported 00:15:43.150 Get LBA Status Capability: Not Supported 00:15:43.150 Command & Feature Lockdown Capability: Not Supported 00:15:43.150 Abort Command Limit: 4 00:15:43.150 Async Event Request Limit: 4 00:15:43.151 Number of Firmware Slots: N/A 00:15:43.151 Firmware Slot 1 Read-Only: N/A 00:15:43.151 Firmware Activation Without Reset: N/A 00:15:43.151 Multiple Update Detection Support: N/A 00:15:43.151 Firmware Update Granularity: No Information Provided 00:15:43.151 Per-Namespace SMART Log: No 00:15:43.151 Asymmetric Namespace Access Log Page: Not Supported 00:15:43.151 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:43.151 Command Effects Log Page: Supported 00:15:43.151 Get Log Page Extended Data: Supported 00:15:43.151 Telemetry Log Pages: Not Supported 00:15:43.151 Persistent Event Log Pages: Not Supported 00:15:43.151 Supported Log Pages Log Page: May Support 00:15:43.151 Commands Supported & Effects Log Page: Not Supported 00:15:43.151 Feature Identifiers & Effects Log Page:May Support 00:15:43.151 NVMe-MI Commands & Effects Log Page: May Support 00:15:43.151 Data Area 4 for Telemetry Log: Not Supported 00:15:43.151 Error Log Page Entries Supported: 128 00:15:43.151 Keep Alive: Supported 00:15:43.151 Keep Alive Granularity: 10000 ms 00:15:43.151 00:15:43.151 NVM Command Set Attributes 00:15:43.151 ========================== 00:15:43.151 Submission Queue Entry Size 00:15:43.151 Max: 64 00:15:43.151 Min: 64 00:15:43.151 Completion Queue Entry Size 00:15:43.151 Max: 16 00:15:43.151 Min: 16 00:15:43.151 Number of Namespaces: 32 00:15:43.151 Compare Command: Supported 00:15:43.151 Write 
Uncorrectable Command: Not Supported 00:15:43.151 Dataset Management Command: Supported 00:15:43.151 Write Zeroes Command: Supported 00:15:43.151 Set Features Save Field: Not Supported 00:15:43.151 Reservations: Not Supported 00:15:43.151 Timestamp: Not Supported 00:15:43.151 Copy: Supported 00:15:43.151 Volatile Write Cache: Present 00:15:43.151 Atomic Write Unit (Normal): 1 00:15:43.151 Atomic Write Unit (PFail): 1 00:15:43.151 Atomic Compare & Write Unit: 1 00:15:43.151 Fused Compare & Write: Supported 00:15:43.151 Scatter-Gather List 00:15:43.151 SGL Command Set: Supported (Dword aligned) 00:15:43.151 SGL Keyed: Not Supported 00:15:43.151 SGL Bit Bucket Descriptor: Not Supported 00:15:43.151 SGL Metadata Pointer: Not Supported 00:15:43.151 Oversized SGL: Not Supported 00:15:43.151 SGL Metadata Address: Not Supported 00:15:43.151 SGL Offset: Not Supported 00:15:43.151 Transport SGL Data Block: Not Supported 00:15:43.151 Replay Protected Memory Block: Not Supported 00:15:43.151 00:15:43.151 Firmware Slot Information 00:15:43.151 ========================= 00:15:43.151 Active slot: 1 00:15:43.151 Slot 1 Firmware Revision: 24.01.1 00:15:43.151 00:15:43.151
00:15:43.151 Commands Supported and Effects 00:15:43.151 ============================== 00:15:43.151 Admin Commands 00:15:43.151 -------------- 00:15:43.151 Get Log Page (02h): Supported 00:15:43.151 Identify (06h): Supported 00:15:43.151 Abort (08h): Supported 00:15:43.151 Set Features (09h): Supported 00:15:43.151 Get Features (0Ah): Supported 00:15:43.151 Asynchronous Event Request (0Ch): Supported 00:15:43.151 Keep Alive (18h): Supported 00:15:43.151 I/O Commands 00:15:43.151 ------------ 00:15:43.151 Flush (00h): Supported LBA-Change 00:15:43.151 Write (01h): Supported LBA-Change 00:15:43.151 Read (02h): Supported 00:15:43.151 Compare (05h): Supported 00:15:43.151 Write Zeroes (08h): Supported LBA-Change 00:15:43.151 Dataset Management (09h): Supported LBA-Change 00:15:43.151 Copy (19h): Supported LBA-Change 00:15:43.151 Unknown (79h): Supported LBA-Change 00:15:43.151 Unknown (7Ah): Supported 00:15:43.151 00:15:43.151 Error Log 00:15:43.151 ========= 00:15:43.151 00:15:43.151 Arbitration 00:15:43.151 =========== 00:15:43.151 Arbitration Burst: 1 00:15:43.151 00:15:43.151 Power Management 00:15:43.151 ================ 00:15:43.151 Number of Power States: 1 00:15:43.151 Current Power State: Power State #0 00:15:43.151 Power State #0: 00:15:43.151 Max Power: 0.00 W 00:15:43.151 Non-Operational State: Operational 00:15:43.151 Entry Latency: Not Reported 00:15:43.151 Exit Latency: Not Reported 00:15:43.151 Relative Read Throughput: 0 00:15:43.151 Relative Read Latency: 0 00:15:43.151 Relative Write Throughput: 0 00:15:43.151 Relative Write Latency: 0 00:15:43.151 Idle Power: Not Reported 00:15:43.151 Active Power: Not Reported 00:15:43.151 Non-Operational Permissive Mode: Not Supported 00:15:43.151
00:15:43.151 Health Information 00:15:43.151 ================== 00:15:43.151 Critical Warnings: 00:15:43.151 Available Spare Space: OK 00:15:43.151 Temperature: OK 00:15:43.151 Device Reliability: OK 00:15:43.151 Read Only: No 00:15:43.151 Volatile Memory Backup: OK 00:15:43.151 Current Temperature: 0 Kelvin (-273 Celsius)
[2024-04-15 01:50:28.595239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:43.151 [2024-04-15 01:50:28.603058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:43.151 [2024-04-15 01:50:28.603118] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:43.151 [2024-04-15 01:50:28.603137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.151 [2024-04-15 01:50:28.603154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.151 [2024-04-15 01:50:28.603166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.151 [2024-04-15 01:50:28.603176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:43.151 [2024-04-15 01:50:28.603243] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:43.151 [2024-04-15 01:50:28.603263] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:43.151 [2024-04-15 01:50:28.604284] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:43.151 [2024-04-15 01:50:28.604299] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:43.151 [2024-04-15 01:50:28.605259] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:43.151 [2024-04-15 01:50:28.605282] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:43.151 [2024-04-15 01:50:28.605337] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:43.151 [2024-04-15 01:50:28.606533] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:43.151
Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:43.151 Available Spare: 0% 00:15:43.151 Available Spare Threshold: 0% 00:15:43.151 Life Percentage Used: 0% 00:15:43.151 Data Units Read: 0 00:15:43.151 Data Units Written: 0 00:15:43.151 Host Read Commands: 0 00:15:43.151 Host Write Commands: 0 00:15:43.151 Controller Busy Time: 0 minutes 00:15:43.151 Power Cycles: 0 00:15:43.151 Power On Hours: 0 hours 00:15:43.151 Unsafe Shutdowns: 0 00:15:43.151 Unrecoverable Media Errors: 0 00:15:43.151 Lifetime Error Log Entries: 0 00:15:43.151 Warning Temperature Time: 0 minutes 00:15:43.151 Critical Temperature Time: 0 minutes 00:15:43.151 00:15:43.151 Number of Queues 00:15:43.151 ================ 00:15:43.151 Number of I/O Submission Queues: 127 00:15:43.151 Number of I/O Completion Queues: 127 00:15:43.151 00:15:43.151 Active Namespaces 00:15:43.151 ================= 00:15:43.151 Namespace ID:1 00:15:43.151 Error Recovery Timeout: Unlimited 00:15:43.151 Command Set Identifier: NVM (00h) 00:15:43.151 Deallocate: Supported 00:15:43.151 Deallocated/Unwritten Error: Not Supported 00:15:43.151 Deallocated Read Value: Unknown 00:15:43.151 Deallocate in Write Zeroes: Not Supported 00:15:43.151 Deallocated Guard Field: 0xFFFF 00:15:43.151 Flush: Supported 00:15:43.151 Reservation: Supported 00:15:43.151 Namespace Sharing
Capabilities: Multiple Controllers 00:15:43.151 Size (in LBAs): 131072 (0GiB) 00:15:43.151 Capacity (in LBAs): 131072 (0GiB) 00:15:43.151 Utilization (in LBAs): 131072 (0GiB) 00:15:43.151 NGUID: 3DF26DAF317D4C54AD3FF055E79669E2 00:15:43.151 UUID: 3df26daf-317d-4c54-ad3f-f055e79669e2 00:15:43.151 Thin Provisioning: Not Supported 00:15:43.151 Per-NS Atomic Units: Yes 00:15:43.151 Atomic Boundary Size (Normal): 0 00:15:43.151 Atomic Boundary Size (PFail): 0 00:15:43.151 Atomic Boundary Offset: 0 00:15:43.151 Maximum Single Source Range Length: 65535 00:15:43.151 Maximum Copy Length: 65535 00:15:43.151 Maximum Source Range Count: 1 00:15:43.151 NGUID/EUI64 Never Reused: No 00:15:43.151 Namespace Write Protected: No 00:15:43.151 Number of LBA Formats: 1 00:15:43.152 Current LBA Format: LBA Format #00 00:15:43.152 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:43.152 00:15:43.152 01:50:28 -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:43.152 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.416 Initializing NVMe Controllers 00:15:48.416 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:48.416 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:48.416 Initialization complete. Launching workers. 00:15:48.416 ======================================================== 00:15:48.416 Latency(us) 00:15:48.416 Device Information : IOPS MiB/s Average min max 00:15:48.416 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 36507.93 142.61 3505.87 1157.62 9472.58 00:15:48.416 ======================================================== 00:15:48.416 Total : 36507.93 142.61 3505.87 1157.62 9472.58 00:15:48.416 00:15:48.416 01:50:33 -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:48.416 EAL: No free 2048 kB hugepages reported on node 1 00:15:53.712 Initializing NVMe Controllers 00:15:53.712 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:53.712 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:53.712 Initialization complete. Launching workers. 
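A quick consistency check on the read table above (a sketch with awk doing the arithmetic, not part of the test output): at a 4096-byte I/O size, throughput is IOPS times block size, and with -q 128 Little's law ties the average latency to queue depth divided by IOPS.

awk 'BEGIN { iops=36507.93; printf "%.2f MiB/s\n", iops*4096/1048576 }'   # ~142.61, matching the MiB/s column
awk 'BEGIN { iops=36507.93; printf "%.0f us\n", 128/iops*1e6 }'           # ~3506, matching the 3505.87 us average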
00:15:53.712 ======================================================== 00:15:53.712 Latency(us) 00:15:53.712 Device Information : IOPS MiB/s Average min max 00:15:53.712 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35489.96 138.63 3605.98 1160.17 7357.61 00:15:53.712 ======================================================== 00:15:53.712 Total : 35489.96 138.63 3605.98 1160.17 7357.61 00:15:53.712 00:15:53.712 01:50:39 -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:53.712 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.972 Initializing NVMe Controllers 00:15:58.972 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:58.972 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:58.972 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:58.972 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:58.972 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:58.972 Initialization complete. Launching workers. 00:15:58.972 Starting thread on core 2 00:15:58.972 Starting thread on core 3 00:15:58.972 Starting thread on core 1 00:15:58.972 01:50:44 -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:59.229 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.512 Initializing NVMe Controllers 00:16:02.512 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:02.512 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:02.512 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:16:02.512 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:16:02.512 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:16:02.512 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:16:02.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:02.512 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:02.512 Initialization complete. Launching workers. 
00:16:02.512 Starting thread on core 1 with urgent priority queue 00:16:02.512 Starting thread on core 2 with urgent priority queue 00:16:02.512 Starting thread on core 3 with urgent priority queue 00:16:02.512 Starting thread on core 0 with urgent priority queue 00:16:02.512 SPDK bdev Controller (SPDK2 ) core 0: 4902.00 IO/s 20.40 secs/100000 ios 00:16:02.512 SPDK bdev Controller (SPDK2 ) core 1: 5202.33 IO/s 19.22 secs/100000 ios 00:16:02.512 SPDK bdev Controller (SPDK2 ) core 2: 5849.33 IO/s 17.10 secs/100000 ios 00:16:02.512 SPDK bdev Controller (SPDK2 ) core 3: 5592.33 IO/s 17.88 secs/100000 ios 00:16:02.512 ======================================================== 00:16:02.512 00:16:02.512 01:50:47 -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:02.512 EAL: No free 2048 kB hugepages reported on node 1 00:16:02.770 Initializing NVMe Controllers 00:16:02.770 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:02.770 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:02.770 Namespace ID: 1 size: 0GB 00:16:02.770 Initialization complete. 00:16:02.770 INFO: using host memory buffer for IO 00:16:02.770 Hello world! 00:16:02.770 01:50:48 -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:16:02.770 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.144 Initializing NVMe Controllers 00:16:04.144 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.144 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.144 Initialization complete. Launching workers. 
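One pattern worth making explicit before the overhead histograms below: every tool in this sequence (spdk_nvme_perf, reconnect, arbitration, hello_world, overhead) reaches the vfio-user controller through the same -r transport string. Condensed into a standalone sketch, with paths relative to the SPDK tree and assuming the endpoint created earlier in the log is still listening:

TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
# 4 KiB reads at queue depth 128 for 5 seconds on core mask 0x2, as in the perf run above
build/bin/spdk_nvme_perf -r "$TR" -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
# the example binaries accept the same transport string
build/examples/hello_world -d 256 -g -r "$TR"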
00:16:04.144 submit (in ns) avg, min, max = 7949.3, 3454.4, 4016920.0 00:16:04.144 complete (in ns) avg, min, max = 26439.2, 2054.4, 4016965.6 00:16:04.144 00:16:04.144 Submit histogram 00:16:04.144 ================ 00:16:04.144 Range in us Cumulative Count 00:16:04.144 3.437 - 3.461: 0.0072% ( 1) 00:16:04.144 3.461 - 3.484: 0.1734% ( 23) 00:16:04.144 3.484 - 3.508: 0.5635% ( 54) 00:16:04.144 3.508 - 3.532: 2.1527% ( 220) 00:16:04.144 3.532 - 3.556: 5.4179% ( 452) 00:16:04.144 3.556 - 3.579: 11.8688% ( 893) 00:16:04.144 3.579 - 3.603: 19.7717% ( 1094) 00:16:04.144 3.603 - 3.627: 30.5714% ( 1495) 00:16:04.144 3.627 - 3.650: 39.5796% ( 1247) 00:16:04.144 3.650 - 3.674: 47.3741% ( 1079) 00:16:04.144 3.674 - 3.698: 52.7631% ( 746) 00:16:04.144 3.698 - 3.721: 57.9210% ( 714) 00:16:04.144 3.721 - 3.745: 61.8652% ( 546) 00:16:04.144 3.745 - 3.769: 64.8776% ( 417) 00:16:04.144 3.769 - 3.793: 68.1355% ( 451) 00:16:04.144 3.793 - 3.816: 71.4874% ( 464) 00:16:04.144 3.816 - 3.840: 75.3883% ( 540) 00:16:04.144 3.840 - 3.864: 80.0910% ( 651) 00:16:04.144 3.864 - 3.887: 83.6596% ( 494) 00:16:04.144 3.887 - 3.911: 86.2313% ( 356) 00:16:04.144 3.911 - 3.935: 88.1673% ( 268) 00:16:04.144 3.935 - 3.959: 89.6771% ( 209) 00:16:04.144 3.959 - 3.982: 91.0352% ( 188) 00:16:04.144 3.982 - 4.006: 92.2199% ( 164) 00:16:04.144 4.006 - 4.030: 93.1951% ( 135) 00:16:04.144 4.030 - 4.053: 94.0259% ( 115) 00:16:04.144 4.053 - 4.077: 94.9144% ( 123) 00:16:04.144 4.077 - 4.101: 95.4056% ( 68) 00:16:04.144 4.101 - 4.124: 95.8318% ( 59) 00:16:04.144 4.124 - 4.148: 96.1930% ( 50) 00:16:04.144 4.148 - 4.172: 96.4748% ( 39) 00:16:04.144 4.172 - 4.196: 96.6337% ( 22) 00:16:04.144 4.196 - 4.219: 96.7637% ( 18) 00:16:04.144 4.219 - 4.243: 96.8937% ( 18) 00:16:04.144 4.243 - 4.267: 97.0310% ( 19) 00:16:04.144 4.267 - 4.290: 97.1610% ( 18) 00:16:04.144 4.290 - 4.314: 97.2549% ( 13) 00:16:04.144 4.314 - 4.338: 97.3272% ( 10) 00:16:04.144 4.338 - 4.361: 97.3922% ( 9) 00:16:04.144 4.361 - 4.385: 97.4211% ( 4) 00:16:04.144 4.385 - 4.409: 97.4572% ( 5) 00:16:04.144 4.409 - 4.433: 97.4861% ( 4) 00:16:04.144 4.433 - 4.456: 97.5005% ( 2) 00:16:04.144 4.456 - 4.480: 97.5294% ( 4) 00:16:04.144 4.480 - 4.504: 97.5511% ( 3) 00:16:04.144 4.527 - 4.551: 97.5656% ( 2) 00:16:04.144 4.551 - 4.575: 97.5728% ( 1) 00:16:04.144 4.622 - 4.646: 97.5872% ( 2) 00:16:04.144 4.646 - 4.670: 97.6161% ( 4) 00:16:04.144 4.670 - 4.693: 97.6306% ( 2) 00:16:04.144 4.693 - 4.717: 97.6378% ( 1) 00:16:04.144 4.717 - 4.741: 97.6522% ( 2) 00:16:04.144 4.741 - 4.764: 97.6956% ( 6) 00:16:04.144 4.764 - 4.788: 97.7534% ( 8) 00:16:04.144 4.788 - 4.812: 97.7895% ( 5) 00:16:04.144 4.812 - 4.836: 97.8256% ( 5) 00:16:04.144 4.836 - 4.859: 97.8690% ( 6) 00:16:04.144 4.859 - 4.883: 97.9412% ( 10) 00:16:04.144 4.883 - 4.907: 97.9845% ( 6) 00:16:04.144 4.907 - 4.930: 98.0279% ( 6) 00:16:04.144 4.930 - 4.954: 98.0857% ( 8) 00:16:04.144 4.954 - 4.978: 98.1073% ( 3) 00:16:04.144 4.978 - 5.001: 98.1362% ( 4) 00:16:04.144 5.001 - 5.025: 98.1579% ( 3) 00:16:04.144 5.025 - 5.049: 98.1868% ( 4) 00:16:04.144 5.049 - 5.073: 98.2013% ( 2) 00:16:04.144 5.073 - 5.096: 98.2157% ( 2) 00:16:04.144 5.096 - 5.120: 98.2446% ( 4) 00:16:04.144 5.120 - 5.144: 98.2879% ( 6) 00:16:04.144 5.144 - 5.167: 98.3168% ( 4) 00:16:04.144 5.167 - 5.191: 98.3385% ( 3) 00:16:04.144 5.191 - 5.215: 98.3457% ( 1) 00:16:04.144 5.239 - 5.262: 98.3530% ( 1) 00:16:04.144 5.262 - 5.286: 98.3602% ( 1) 00:16:04.144 5.310 - 5.333: 98.3746% ( 2) 00:16:04.144 5.333 - 5.357: 98.3819% ( 1) 00:16:04.144 5.357 - 5.381: 98.3891% ( 1) 
00:16:04.144 5.381 - 5.404: 98.3963% ( 1) 00:16:04.144 5.404 - 5.428: 98.4035% ( 1) 00:16:04.144 5.428 - 5.452: 98.4107% ( 1) 00:16:04.144 5.594 - 5.618: 98.4180% ( 1) 00:16:04.144 5.665 - 5.689: 98.4252% ( 1) 00:16:04.144 5.689 - 5.713: 98.4324% ( 1) 00:16:04.144 5.902 - 5.926: 98.4396% ( 1) 00:16:04.144 5.926 - 5.950: 98.4541% ( 2) 00:16:04.144 6.116 - 6.163: 98.4685% ( 2) 00:16:04.144 6.210 - 6.258: 98.4758% ( 1) 00:16:04.144 6.258 - 6.305: 98.4830% ( 1) 00:16:04.144 6.353 - 6.400: 98.4902% ( 1) 00:16:04.144 6.779 - 6.827: 98.4974% ( 1) 00:16:04.144 6.921 - 6.969: 98.5047% ( 1) 00:16:04.144 7.159 - 7.206: 98.5119% ( 1) 00:16:04.144 7.253 - 7.301: 98.5191% ( 1) 00:16:04.144 7.301 - 7.348: 98.5263% ( 1) 00:16:04.144 7.443 - 7.490: 98.5408% ( 2) 00:16:04.144 7.490 - 7.538: 98.5480% ( 1) 00:16:04.144 7.585 - 7.633: 98.5625% ( 2) 00:16:04.144 7.727 - 7.775: 98.5697% ( 1) 00:16:04.144 7.870 - 7.917: 98.5986% ( 4) 00:16:04.144 7.917 - 7.964: 98.6058% ( 1) 00:16:04.144 8.107 - 8.154: 98.6275% ( 3) 00:16:04.145 8.201 - 8.249: 98.6419% ( 2) 00:16:04.145 8.296 - 8.344: 98.6491% ( 1) 00:16:04.145 8.344 - 8.391: 98.6564% ( 1) 00:16:04.145 8.439 - 8.486: 98.6636% ( 1) 00:16:04.145 8.581 - 8.628: 98.6780% ( 2) 00:16:04.145 8.676 - 8.723: 98.6853% ( 1) 00:16:04.145 8.723 - 8.770: 98.7069% ( 3) 00:16:04.145 8.770 - 8.818: 98.7142% ( 1) 00:16:04.145 8.818 - 8.865: 98.7214% ( 1) 00:16:04.145 8.865 - 8.913: 98.7286% ( 1) 00:16:04.145 8.913 - 8.960: 98.7430% ( 2) 00:16:04.145 9.055 - 9.102: 98.7575% ( 2) 00:16:04.145 9.339 - 9.387: 98.7647% ( 1) 00:16:04.145 9.387 - 9.434: 98.7719% ( 1) 00:16:04.145 9.434 - 9.481: 98.7792% ( 1) 00:16:04.145 9.576 - 9.624: 98.7864% ( 1) 00:16:04.145 9.956 - 10.003: 98.7936% ( 1) 00:16:04.145 10.003 - 10.050: 98.8081% ( 2) 00:16:04.145 10.050 - 10.098: 98.8153% ( 1) 00:16:04.145 10.287 - 10.335: 98.8297% ( 2) 00:16:04.145 10.761 - 10.809: 98.8442% ( 2) 00:16:04.145 11.473 - 11.520: 98.8514% ( 1) 00:16:04.145 11.520 - 11.567: 98.8586% ( 1) 00:16:04.145 11.710 - 11.757: 98.8659% ( 1) 00:16:04.145 11.947 - 11.994: 98.8731% ( 1) 00:16:04.145 12.231 - 12.326: 98.8803% ( 1) 00:16:04.145 12.421 - 12.516: 98.8875% ( 1) 00:16:04.145 12.610 - 12.705: 98.8947% ( 1) 00:16:04.145 12.705 - 12.800: 98.9020% ( 1) 00:16:04.145 12.800 - 12.895: 98.9092% ( 1) 00:16:04.145 13.274 - 13.369: 98.9236% ( 2) 00:16:04.145 13.559 - 13.653: 98.9309% ( 1) 00:16:04.145 13.748 - 13.843: 98.9453% ( 2) 00:16:04.145 14.696 - 14.791: 98.9598% ( 2) 00:16:04.145 17.161 - 17.256: 98.9670% ( 1) 00:16:04.145 17.256 - 17.351: 98.9887% ( 3) 00:16:04.145 17.351 - 17.446: 99.0103% ( 3) 00:16:04.145 17.446 - 17.541: 99.0248% ( 2) 00:16:04.145 17.541 - 17.636: 99.0681% ( 6) 00:16:04.145 17.636 - 17.730: 99.1042% ( 5) 00:16:04.145 17.730 - 17.825: 99.1620% ( 8) 00:16:04.145 17.825 - 17.920: 99.2270% ( 9) 00:16:04.145 17.920 - 18.015: 99.2632% ( 5) 00:16:04.145 18.015 - 18.110: 99.2921% ( 4) 00:16:04.145 18.110 - 18.204: 99.4365% ( 20) 00:16:04.145 18.204 - 18.299: 99.5232% ( 12) 00:16:04.145 18.299 - 18.394: 99.5666% ( 6) 00:16:04.145 18.394 - 18.489: 99.6244% ( 8) 00:16:04.145 18.489 - 18.584: 99.6533% ( 4) 00:16:04.145 18.584 - 18.679: 99.7110% ( 8) 00:16:04.145 18.679 - 18.773: 99.7255% ( 2) 00:16:04.145 18.773 - 18.868: 99.7472% ( 3) 00:16:04.145 18.868 - 18.963: 99.7616% ( 2) 00:16:04.145 18.963 - 19.058: 99.7905% ( 4) 00:16:04.145 19.058 - 19.153: 99.8050% ( 2) 00:16:04.145 19.153 - 19.247: 99.8194% ( 2) 00:16:04.145 19.342 - 19.437: 99.8266% ( 1) 00:16:04.145 19.437 - 19.532: 99.8339% ( 1) 00:16:04.145 19.532 - 
19.627: 99.8411% ( 1) 00:16:04.145 19.627 - 19.721: 99.8483% ( 1) 00:16:04.145 19.721 - 19.816: 99.8555% ( 1) 00:16:04.145 19.816 - 19.911: 99.8627% ( 1) 00:16:04.145 22.187 - 22.281: 99.8700% ( 1) 00:16:04.145 22.566 - 22.661: 99.8772% ( 1) 00:16:04.145 22.661 - 22.756: 99.8844% ( 1) 00:16:04.145 24.462 - 24.652: 99.8916% ( 1) 00:16:04.145 28.065 - 28.255: 99.8989% ( 1) 00:16:04.145 3980.705 - 4004.978: 99.9856% ( 12) 00:16:04.145 4004.978 - 4029.250: 100.0000% ( 2) 00:16:04.145 00:16:04.145 Complete histogram 00:16:04.145 ================== 00:16:04.145 Range in us Cumulative Count 00:16:04.145 2.050 - 2.062: 0.2095% ( 29) 00:16:04.145 2.062 - 2.074: 11.3342% ( 1540) 00:16:04.145 2.074 - 2.086: 30.4053% ( 2640) 00:16:04.145 2.086 - 2.098: 34.6746% ( 591) 00:16:04.145 2.098 - 2.110: 48.8767% ( 1966) 00:16:04.145 2.110 - 2.121: 61.1284% ( 1696) 00:16:04.145 2.121 - 2.133: 63.6784% ( 353) 00:16:04.145 2.133 - 2.145: 69.0963% ( 750) 00:16:04.145 2.145 - 2.157: 73.8279% ( 655) 00:16:04.145 2.157 - 2.169: 75.3449% ( 210) 00:16:04.145 2.169 - 2.181: 79.4264% ( 565) 00:16:04.145 2.181 - 2.193: 82.6916% ( 452) 00:16:04.145 2.193 - 2.204: 83.7246% ( 143) 00:16:04.145 2.204 - 2.216: 87.1126% ( 469) 00:16:04.145 2.216 - 2.228: 90.2261% ( 431) 00:16:04.145 2.228 - 2.240: 90.9413% ( 99) 00:16:04.145 2.240 - 2.252: 92.6172% ( 232) 00:16:04.145 2.252 - 2.264: 94.0331% ( 196) 00:16:04.145 2.264 - 2.276: 94.4954% ( 64) 00:16:04.145 2.276 - 2.287: 95.1672% ( 93) 00:16:04.145 2.287 - 2.299: 95.7885% ( 86) 00:16:04.145 2.299 - 2.311: 95.9185% ( 18) 00:16:04.145 2.311 - 2.323: 95.9330% ( 2) 00:16:04.145 2.323 - 2.335: 95.9980% ( 9) 00:16:04.145 2.335 - 2.347: 96.0702% ( 10) 00:16:04.145 2.347 - 2.359: 96.1714% ( 14) 00:16:04.145 2.359 - 2.370: 96.3447% ( 24) 00:16:04.145 2.370 - 2.382: 96.5181% ( 24) 00:16:04.145 2.382 - 2.394: 96.6553% ( 19) 00:16:04.145 2.394 - 2.406: 96.8071% ( 21) 00:16:04.145 2.406 - 2.418: 97.0527% ( 34) 00:16:04.145 2.418 - 2.430: 97.2188% ( 23) 00:16:04.145 2.430 - 2.441: 97.4428% ( 31) 00:16:04.145 2.441 - 2.453: 97.6089% ( 23) 00:16:04.145 2.453 - 2.465: 97.7389% ( 18) 00:16:04.145 2.465 - 2.477: 97.8545% ( 16) 00:16:04.145 2.477 - 2.489: 97.9556% ( 14) 00:16:04.145 2.489 - 2.501: 98.0785% ( 17) 00:16:04.145 2.501 - 2.513: 98.2085% ( 18) 00:16:04.145 2.513 - 2.524: 98.2807% ( 10) 00:16:04.145 2.524 - 2.536: 98.3530% ( 10) 00:16:04.145 2.536 - 2.548: 98.3891% ( 5) 00:16:04.145 2.548 - 2.560: 98.4180% ( 4) 00:16:04.145 2.560 - 2.572: 98.4541% ( 5) 00:16:04.145 2.572 - 2.584: 98.4830% ( 4) 00:16:04.145 2.584 - 2.596: 98.4974% ( 2) 00:16:04.145 2.596 - 2.607: 98.5047% ( 1) 00:16:04.145 2.607 - 2.619: 98.5119% ( 1) 00:16:04.145 2.726 - 2.738: 98.5263% ( 2) 00:16:04.145 2.738 - 2.750: 98.5336% ( 1) 00:16:04.145 2.750 - 2.761: 98.5408% ( 1) 00:16:04.145 3.390 - 3.413: 98.5480% ( 1) 00:16:04.145 3.437 - 3.461: 98.5552% ( 1) 00:16:04.145 3.461 - 3.484: 98.5625% ( 1) 00:16:04.145 3.484 - 3.508: 98.5697% ( 1) 00:16:04.145 3.556 - 3.579: 98.5986% ( 4) 00:16:04.145 3.579 - 3.603: 98.6058% ( 1) 00:16:04.145 3.603 - 3.627: 98.6130% ( 1) 00:16:04.145 3.627 - 3.650: 98.6275% ( 2) 00:16:04.145 3.650 - 3.674: 98.6347% ( 1) 00:16:04.145 3.674 - 3.698: 98.6419% ( 1) 00:16:04.145 3.698 - 3.721: 98.6491% ( 1) 00:16:04.145 3.745 - 3.769: 98.6636% ( 2) 00:16:04.145 3.793 - 3.816: 98.6708% ( 1) 00:16:04.145 3.840 - 3.864: 98.6780% ( 1) 00:16:04.145 3.864 - 3.887: 98.6853% ( 1) 00:16:04.145 3.911 - 3.935: 98.6925% ( 1) 00:16:04.145 3.959 - 3.982: 98.6997% ( 1) 00:16:04.145 3.982 - 4.006: 98.7069% ( 1) 
00:16:04.145 5.428 - 5.452: 98.7142% ( 1) 00:16:04.145 5.879 - 5.902: 98.7286% ( 2) 00:16:04.145 5.902 - 5.926: 98.7358% ( 1) 00:16:04.145 5.950 - 5.973: 98.7430% ( 1) 00:16:04.145 6.044 - 6.068: 98.7503% ( 1) 00:16:04.145 6.163 - 6.210: 98.7575% ( 1) 00:16:04.145 6.353 - 6.400: 98.7647% ( 1) 00:16:04.145 6.400 - 6.447: 98.7719% ( 1) 00:16:04.145 6.542 - 6.590: 98.7864% ( 2) 00:16:04.145 6.684 - 6.732: 98.7936% ( 1) 00:16:04.145 6.732 - 6.779: 98.8008% ( 1) 00:16:04.145 6.779 - 6.827: 98.8081% ( 1) 00:16:04.145 6.921 - 6.969: 98.8153% ( 1) 00:16:04.145 7.064 - 7.111: 98.8225% ( 1) 00:16:04.145 7.111 - 7.159: 98.8297% ( 1) 00:16:04.145 7.253 - 7.301: 98.8370% ( 1) 00:16:04.145 7.348 - 7.396: 98.8442% ( 1) 00:16:04.145 7.443 - 7.490: 98.8514% ( 1) 00:16:04.145 8.107 - 8.154: 98.8586% ( 1) 00:16:04.145 8.201 - 8.249: 98.8659% ( 1) 00:16:04.145 8.628 - 8.676: 98.8731% ( 1) 00:16:04.145 8.723 - 8.770: 98.8803% ( 1) 00:16:04.145 10.287 - 10.335: 98.8875% ( 1) 00:16:04.145 10.904 - 10.951: 98.8947% ( 1) 00:16:04.145 12.089 - 12.136: 98.9020% ( 1) 00:16:04.145 15.644 - 15.739: 98.9092% ( 1) 00:16:04.145 15.739 - 15.834: 98.9236% ( 2) 00:16:04.145 15.834 - 15.929: 98.9309% ( 1) 00:16:04.145 16.024 - 16.119: 98.9670% ( 5) 00:16:04.145 16.119 - 16.213: 98.9887% ( 3) 00:16:04.145 16.213 - 16.308: 99.0031% ( 2) 00:16:04.145 16.308 - 16.403: 99.0103% ( 1) 00:16:04.145 16.403 - 16.498: 99.0464% ( 5) 00:16:04.145 16.498 - 16.593: 99.1404% ( 13) 00:16:04.145 16.593 - 16.687: 99.2126% ( 10) 00:16:04.145 16.687 - 16.782: 99.2343% ( 3) 00:16:04.145 16.782 - 16.877: 99.2559% ( 3) 00:16:04.145 16.877 - 16.972: 99.2993% ( 6) 00:16:04.145 16.972 - 17.067: 99.3137% ( 2) 00:16:04.145 17.067 - 17.161: 99.3210% ( 1) 00:16:04.145 17.161 - 17.256: 99.3282% ( 1) 00:16:04.145 17.256 - 17.351: 99.3354% ( 1) 00:16:04.145 17.351 - 17.446: 99.3571% ( 3) 00:16:04.145 17.446 - 17.541: 99.3643% ( 1) 00:16:04.145 17.541 - 17.636: 99.3715% ( 1) 00:16:04.145 17.730 - 17.825: 99.3787% ( 1) 00:16:04.145 17.825 - 17.920: 99.3860% ( 1) 00:16:04.145 22.756 - 22.850: 99.3932% ( 1) 00:16:04.145 3203.982 - 3228.255: 99.4004% ( 1) 00:16:04.145 3980.705 - 4004.978: 99.8555% ( 63) 00:16:04.146 4004.978 - 4029.250: 100.0000% ( 20) 00:16:04.146 00:16:04.146 01:50:49 -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:16:04.146 01:50:49 -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:04.146 01:50:49 -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:16:04.146 01:50:49 -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:16:04.146 01:50:49 -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:04.404 [ 00:16:04.404 { 00:16:04.404 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:04.404 "subtype": "Discovery", 00:16:04.404 "listen_addresses": [], 00:16:04.404 "allow_any_host": true, 00:16:04.404 "hosts": [] 00:16:04.404 }, 00:16:04.404 { 00:16:04.404 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:04.404 "subtype": "NVMe", 00:16:04.404 "listen_addresses": [ 00:16:04.404 { 00:16:04.404 "transport": "VFIOUSER", 00:16:04.404 "trtype": "VFIOUSER", 00:16:04.404 "adrfam": "IPv4", 00:16:04.404 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:04.404 "trsvcid": "0" 00:16:04.404 } 00:16:04.404 ], 00:16:04.404 "allow_any_host": true, 00:16:04.404 "hosts": [], 00:16:04.404 "serial_number": "SPDK1", 00:16:04.404 "model_number": 
"SPDK bdev Controller", 00:16:04.404 "max_namespaces": 32, 00:16:04.404 "min_cntlid": 1, 00:16:04.404 "max_cntlid": 65519, 00:16:04.404 "namespaces": [ 00:16:04.404 { 00:16:04.404 "nsid": 1, 00:16:04.404 "bdev_name": "Malloc1", 00:16:04.404 "name": "Malloc1", 00:16:04.404 "nguid": "78BDAADFE3B344F0B23DF48428C5B6A8", 00:16:04.404 "uuid": "78bdaadf-e3b3-44f0-b23d-f48428c5b6a8" 00:16:04.404 }, 00:16:04.404 { 00:16:04.404 "nsid": 2, 00:16:04.404 "bdev_name": "Malloc3", 00:16:04.404 "name": "Malloc3", 00:16:04.404 "nguid": "775C886E476E4B458DBEEF581B3E3D82", 00:16:04.404 "uuid": "775c886e-476e-4b45-8dbe-ef581b3e3d82" 00:16:04.404 } 00:16:04.404 ] 00:16:04.404 }, 00:16:04.404 { 00:16:04.404 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:04.404 "subtype": "NVMe", 00:16:04.404 "listen_addresses": [ 00:16:04.404 { 00:16:04.404 "transport": "VFIOUSER", 00:16:04.404 "trtype": "VFIOUSER", 00:16:04.404 "adrfam": "IPv4", 00:16:04.404 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:04.404 "trsvcid": "0" 00:16:04.404 } 00:16:04.404 ], 00:16:04.404 "allow_any_host": true, 00:16:04.404 "hosts": [], 00:16:04.404 "serial_number": "SPDK2", 00:16:04.404 "model_number": "SPDK bdev Controller", 00:16:04.404 "max_namespaces": 32, 00:16:04.404 "min_cntlid": 1, 00:16:04.404 "max_cntlid": 65519, 00:16:04.404 "namespaces": [ 00:16:04.404 { 00:16:04.404 "nsid": 1, 00:16:04.404 "bdev_name": "Malloc2", 00:16:04.404 "name": "Malloc2", 00:16:04.404 "nguid": "3DF26DAF317D4C54AD3FF055E79669E2", 00:16:04.404 "uuid": "3df26daf-317d-4c54-ad3f-f055e79669e2" 00:16:04.404 } 00:16:04.404 ] 00:16:04.404 } 00:16:04.404 ] 00:16:04.404 01:50:49 -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:04.404 01:50:49 -- target/nvmf_vfio_user.sh@34 -- # aerpid=2137005 00:16:04.405 01:50:49 -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:16:04.405 01:50:49 -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:04.405 01:50:49 -- common/autotest_common.sh@1244 -- # local i=0 00:16:04.405 01:50:49 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:04.405 01:50:49 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:04.405 01:50:49 -- common/autotest_common.sh@1255 -- # return 0 00:16:04.405 01:50:49 -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:04.405 01:50:49 -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:16:04.405 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.663 Malloc4 00:16:04.663 01:50:50 -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:16:04.920 01:50:50 -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:04.920 Asynchronous Event Request test 00:16:04.920 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.920 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:16:04.920 Registering asynchronous event callbacks... 00:16:04.920 Starting namespace attribute notice tests for all controllers... 
00:16:04.920 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:04.920 aer_cb - Changed Namespace 00:16:04.920 Cleaning up... 00:16:05.178 [ 00:16:05.178 { 00:16:05.178 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:05.178 "subtype": "Discovery", 00:16:05.178 "listen_addresses": [], 00:16:05.178 "allow_any_host": true, 00:16:05.178 "hosts": [] 00:16:05.178 }, 00:16:05.178 { 00:16:05.178 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:05.178 "subtype": "NVMe", 00:16:05.178 "listen_addresses": [ 00:16:05.178 { 00:16:05.178 "transport": "VFIOUSER", 00:16:05.178 "trtype": "VFIOUSER", 00:16:05.178 "adrfam": "IPv4", 00:16:05.178 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:05.178 "trsvcid": "0" 00:16:05.178 } 00:16:05.178 ], 00:16:05.178 "allow_any_host": true, 00:16:05.178 "hosts": [], 00:16:05.178 "serial_number": "SPDK1", 00:16:05.178 "model_number": "SPDK bdev Controller", 00:16:05.178 "max_namespaces": 32, 00:16:05.178 "min_cntlid": 1, 00:16:05.178 "max_cntlid": 65519, 00:16:05.178 "namespaces": [ 00:16:05.178 { 00:16:05.178 "nsid": 1, 00:16:05.178 "bdev_name": "Malloc1", 00:16:05.178 "name": "Malloc1", 00:16:05.178 "nguid": "78BDAADFE3B344F0B23DF48428C5B6A8", 00:16:05.178 "uuid": "78bdaadf-e3b3-44f0-b23d-f48428c5b6a8" 00:16:05.178 }, 00:16:05.178 { 00:16:05.178 "nsid": 2, 00:16:05.178 "bdev_name": "Malloc3", 00:16:05.178 "name": "Malloc3", 00:16:05.178 "nguid": "775C886E476E4B458DBEEF581B3E3D82", 00:16:05.178 "uuid": "775c886e-476e-4b45-8dbe-ef581b3e3d82" 00:16:05.178 } 00:16:05.178 ] 00:16:05.178 }, 00:16:05.178 { 00:16:05.178 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:05.178 "subtype": "NVMe", 00:16:05.178 "listen_addresses": [ 00:16:05.178 { 00:16:05.178 "transport": "VFIOUSER", 00:16:05.178 "trtype": "VFIOUSER", 00:16:05.178 "adrfam": "IPv4", 00:16:05.178 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:05.178 "trsvcid": "0" 00:16:05.178 } 00:16:05.178 ], 00:16:05.178 "allow_any_host": true, 00:16:05.178 "hosts": [], 00:16:05.178 "serial_number": "SPDK2", 00:16:05.178 "model_number": "SPDK bdev Controller", 00:16:05.178 "max_namespaces": 32, 00:16:05.178 "min_cntlid": 1, 00:16:05.178 "max_cntlid": 65519, 00:16:05.178 "namespaces": [ 00:16:05.178 { 00:16:05.178 "nsid": 1, 00:16:05.178 "bdev_name": "Malloc2", 00:16:05.178 "name": "Malloc2", 00:16:05.178 "nguid": "3DF26DAF317D4C54AD3FF055E79669E2", 00:16:05.178 "uuid": "3df26daf-317d-4c54-ad3f-f055e79669e2" 00:16:05.178 }, 00:16:05.178 { 00:16:05.178 "nsid": 2, 00:16:05.178 "bdev_name": "Malloc4", 00:16:05.178 "name": "Malloc4", 00:16:05.178 "nguid": "42BE2B056C394FF3BE4E4BAAFBA3185B", 00:16:05.178 "uuid": "42be2b05-6c39-4ff3-be4e-4baafba3185b" 00:16:05.178 } 00:16:05.178 ] 00:16:05.178 } 00:16:05.178 ] 00:16:05.178 01:50:50 -- target/nvmf_vfio_user.sh@44 -- # wait 2137005 00:16:05.178 01:50:50 -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:16:05.178 01:50:50 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2131229 00:16:05.178 01:50:50 -- common/autotest_common.sh@926 -- # '[' -z 2131229 ']' 00:16:05.178 01:50:50 -- common/autotest_common.sh@930 -- # kill -0 2131229 00:16:05.178 01:50:50 -- common/autotest_common.sh@931 -- # uname 00:16:05.179 01:50:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:05.179 01:50:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2131229 00:16:05.179 01:50:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:05.179 01:50:50 -- common/autotest_common.sh@936 -- # 
'[' reactor_0 = sudo ']' 00:16:05.179 01:50:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2131229' 00:16:05.179 killing process with pid 2131229 00:16:05.179 01:50:50 -- common/autotest_common.sh@945 -- # kill 2131229 00:16:05.179 [2024-04-15 01:50:50.746463] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:16:05.179 01:50:50 -- common/autotest_common.sh@950 -- # wait 2131229 00:16:05.745 01:50:51 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:05.745 01:50:51 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:05.745 01:50:51 -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:16:05.745 01:50:51 -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:16:05.745 01:50:51 -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:16:05.745 01:50:51 -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2137147 00:16:05.745 01:50:51 -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:16:05.745 01:50:51 -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2137147' 00:16:05.745 Process pid: 2137147 00:16:05.745 01:50:51 -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:05.745 01:50:51 -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2137147 00:16:05.745 01:50:51 -- common/autotest_common.sh@819 -- # '[' -z 2137147 ']' 00:16:05.745 01:50:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:05.745 01:50:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:05.745 01:50:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:05.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:05.745 01:50:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:05.745 01:50:51 -- common/autotest_common.sh@10 -- # set +x 00:16:05.745 [2024-04-15 01:50:51.136380] thread.c:2927:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:16:05.745 [2024-04-15 01:50:51.137517] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:16:05.746 [2024-04-15 01:50:51.137592] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:05.746 EAL: No free 2048 kB hugepages reported on node 1 00:16:05.746 [2024-04-15 01:50:51.197626] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:05.746 [2024-04-15 01:50:51.281593] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:05.746 [2024-04-15 01:50:51.281742] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:05.746 [2024-04-15 01:50:51.281762] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:05.746 [2024-04-15 01:50:51.281776] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
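The restart under way here brings nvmf_tgt back up in interrupt mode, and the RPCs that follow rebuild the two vfio-user endpoints with the same create-bdev/add-namespace pattern that triggered the namespace-attribute AEN above. Condensed into a sketch (one of the two devices shown, paths relative to the SPDK tree, not a verbatim copy of the harness):

build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
# the harness waits for the RPC socket before issuing any of the following
scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0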
00:16:05.746 [2024-04-15 01:50:51.281843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.746 [2024-04-15 01:50:51.281867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:05.746 [2024-04-15 01:50:51.281908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:05.746 [2024-04-15 01:50:51.281911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.746 [2024-04-15 01:50:51.381243] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_0) to intr mode from intr mode. 00:16:05.746 [2024-04-15 01:50:51.381471] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_1) to intr mode from intr mode. 00:16:05.746 [2024-04-15 01:50:51.381769] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_2) to intr mode from intr mode. 00:16:05.746 [2024-04-15 01:50:51.382543] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:05.746 [2024-04-15 01:50:51.382645] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_3) to intr mode from intr mode. 00:16:06.680 01:50:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:06.680 01:50:52 -- common/autotest_common.sh@852 -- # return 0 00:16:06.680 01:50:52 -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:07.613 01:50:53 -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:16:07.872 01:50:53 -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:07.872 01:50:53 -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:07.872 01:50:53 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:07.872 01:50:53 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:07.872 01:50:53 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:08.131 Malloc1 00:16:08.131 01:50:53 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:08.389 01:50:53 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:08.648 01:50:54 -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:08.907 01:50:54 -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:08.907 01:50:54 -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:08.907 01:50:54 -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:09.165 Malloc2 00:16:09.165 01:50:54 -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:09.423 01:50:55 -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:09.695 01:50:55 -- target/nvmf_vfio_user.sh@74 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:09.992 01:50:55 -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:16:09.992 01:50:55 -- target/nvmf_vfio_user.sh@95 -- # killprocess 2137147 00:16:09.992 01:50:55 -- common/autotest_common.sh@926 -- # '[' -z 2137147 ']' 00:16:09.992 01:50:55 -- common/autotest_common.sh@930 -- # kill -0 2137147 00:16:09.992 01:50:55 -- common/autotest_common.sh@931 -- # uname 00:16:09.992 01:50:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:09.992 01:50:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2137147 00:16:09.992 01:50:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:09.992 01:50:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:09.992 01:50:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2137147' 00:16:09.992 killing process with pid 2137147 00:16:09.992 01:50:55 -- common/autotest_common.sh@945 -- # kill 2137147 00:16:09.992 01:50:55 -- common/autotest_common.sh@950 -- # wait 2137147 00:16:10.251 01:50:55 -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:16:10.251 01:50:55 -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:16:10.251 00:16:10.251 real 0m53.726s 00:16:10.251 user 3m32.262s 00:16:10.251 sys 0m4.780s 00:16:10.251 01:50:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.251 01:50:55 -- common/autotest_common.sh@10 -- # set +x 00:16:10.251 ************************************ 00:16:10.251 END TEST nvmf_vfio_user 00:16:10.251 ************************************ 00:16:10.251 01:50:55 -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:10.251 01:50:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:10.251 01:50:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:10.251 01:50:55 -- common/autotest_common.sh@10 -- # set +x 00:16:10.251 ************************************ 00:16:10.251 START TEST nvmf_vfio_user_nvme_compliance 00:16:10.251 ************************************ 00:16:10.251 01:50:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:16:10.251 * Looking for test storage... 
00:16:10.251 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:16:10.251 01:50:55 -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:10.251 01:50:55 -- nvmf/common.sh@7 -- # uname -s 00:16:10.251 01:50:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:10.251 01:50:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:10.251 01:50:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:10.251 01:50:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:10.251 01:50:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:10.251 01:50:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:10.251 01:50:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:10.251 01:50:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:10.251 01:50:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:10.251 01:50:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:10.252 01:50:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.252 01:50:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:10.252 01:50:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:10.252 01:50:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:10.252 01:50:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:10.252 01:50:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:10.252 01:50:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.252 01:50:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.252 01:50:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.252 01:50:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.252 01:50:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.252 01:50:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.252 01:50:55 -- paths/export.sh@5 -- # export PATH 00:16:10.252 01:50:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.252 01:50:55 -- nvmf/common.sh@46 -- # : 0 00:16:10.252 01:50:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:10.252 01:50:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:10.252 01:50:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:10.252 01:50:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:10.252 01:50:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:10.252 01:50:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:10.252 01:50:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:10.252 01:50:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:10.252 01:50:55 -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:10.252 01:50:55 -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:10.252 01:50:55 -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:16:10.252 01:50:55 -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:16:10.252 01:50:55 -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:16:10.252 01:50:55 -- compliance/compliance.sh@20 -- # nvmfpid=2137775 00:16:10.252 01:50:55 -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:10.252 01:50:55 -- compliance/compliance.sh@21 -- # echo 'Process pid: 2137775' 00:16:10.252 Process pid: 2137775 00:16:10.252 01:50:55 -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:10.252 01:50:55 -- compliance/compliance.sh@24 -- # waitforlisten 2137775 00:16:10.252 01:50:55 -- common/autotest_common.sh@819 -- # '[' -z 2137775 ']' 00:16:10.252 01:50:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.252 01:50:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:10.252 01:50:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.252 01:50:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:10.252 01:50:55 -- common/autotest_common.sh@10 -- # set +x 00:16:10.512 [2024-04-15 01:50:55.903023] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
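The rpc_cmd calls that follow provision a minimal one-namespace target for the compliance binary. Expressed with plain rpc.py instead of the rpc_cmd helper (a rough equivalent offered as a sketch, not the harness verbatim):

scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'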
00:16:10.512 [2024-04-15 01:50:55.903144] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:10.512 EAL: No free 2048 kB hugepages reported on node 1 00:16:10.512 [2024-04-15 01:50:55.969219] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:10.512 [2024-04-15 01:50:56.056300] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:10.512 [2024-04-15 01:50:56.056447] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:10.512 [2024-04-15 01:50:56.056464] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:10.512 [2024-04-15 01:50:56.056477] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:10.512 [2024-04-15 01:50:56.056531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.512 [2024-04-15 01:50:56.056559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.512 [2024-04-15 01:50:56.056563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.450 01:50:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:11.451 01:50:56 -- common/autotest_common.sh@852 -- # return 0 00:16:11.451 01:50:56 -- compliance/compliance.sh@26 -- # sleep 1 00:16:12.390 01:50:57 -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:12.390 01:50:57 -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:16:12.390 01:50:57 -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:12.390 01:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.390 01:50:57 -- common/autotest_common.sh@10 -- # set +x 00:16:12.390 01:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.390 01:50:57 -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:16:12.390 01:50:57 -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:12.390 01:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.390 01:50:57 -- common/autotest_common.sh@10 -- # set +x 00:16:12.390 malloc0 00:16:12.390 01:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.390 01:50:57 -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:16:12.390 01:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.390 01:50:57 -- common/autotest_common.sh@10 -- # set +x 00:16:12.390 01:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.390 01:50:57 -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:12.390 01:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.390 01:50:57 -- common/autotest_common.sh@10 -- # set +x 00:16:12.390 01:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.390 01:50:57 -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:12.390 01:50:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.390 01:50:57 -- common/autotest_common.sh@10 -- # set +x 00:16:12.390 01:50:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.390 01:50:57 -- compliance/compliance.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:16:12.390 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.650 00:16:12.650 00:16:12.650 CUnit - A unit testing framework for C - Version 2.1-3 00:16:12.650 http://cunit.sourceforge.net/ 00:16:12.650 00:16:12.650 00:16:12.650 Suite: nvme_compliance 00:16:12.650 Test: admin_identify_ctrlr_verify_dptr ...[2024-04-15 01:50:58.119043] vfio_user.c: 789:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:16:12.650 [2024-04-15 01:50:58.119117] vfio_user.c:5484:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:16:12.650 [2024-04-15 01:50:58.119132] vfio_user.c:5576:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:16:12.650 passed 00:16:12.650 Test: admin_identify_ctrlr_verify_fused ...passed 00:16:12.908 Test: admin_identify_ns ...[2024-04-15 01:50:58.357063] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:16:12.908 [2024-04-15 01:50:58.365091] ctrlr.c:2598:_nvmf_subsystem_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:16:12.908 passed 00:16:12.908 Test: admin_get_features_mandatory_features ...passed 00:16:13.168 Test: admin_get_features_optional_features ...passed 00:16:13.168 Test: admin_set_features_number_of_queues ...passed 00:16:13.428 Test: admin_get_log_page_mandatory_logs ...passed 00:16:13.428 Test: admin_get_log_page_with_lpo ...[2024-04-15 01:50:58.991060] ctrlr.c:2546:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:16:13.428 passed 00:16:13.687 Test: fabric_property_get ...passed 00:16:13.687 Test: admin_delete_io_sq_use_admin_qid ...[2024-04-15 01:50:59.173761] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:16:13.687 passed 00:16:13.947 Test: admin_delete_io_sq_delete_sq_twice ...[2024-04-15 01:50:59.343074] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:13.947 [2024-04-15 01:50:59.359057] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:13.947 passed 00:16:13.947 Test: admin_delete_io_cq_use_admin_qid ...[2024-04-15 01:50:59.447631] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:16:13.947 passed 00:16:14.205 Test: admin_delete_io_cq_delete_cq_first ...[2024-04-15 01:50:59.610068] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:14.205 [2024-04-15 01:50:59.634070] vfio_user.c:2300:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:16:14.205 passed 00:16:14.206 Test: admin_create_io_cq_verify_iv_pc ...[2024-04-15 01:50:59.724137] vfio_user.c:2150:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:16:14.206 [2024-04-15 01:50:59.724177] vfio_user.c:2144:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:16:14.206 passed 00:16:14.464 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-04-15 01:50:59.900054] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:16:14.464 [2024-04-15 01:50:59.908070] vfio_user.c:2231:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:16:14.464 [2024-04-15 01:50:59.916053] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid 
cqid:0 00:16:14.464 [2024-04-15 01:50:59.924072] vfio_user.c:2031:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:16:14.464 passed 00:16:14.464 Test: admin_create_io_sq_verify_pc ...[2024-04-15 01:51:00.052073] vfio_user.c:2044:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:16:14.723 passed 00:16:15.662 Test: admin_create_io_qp_max_qps ...[2024-04-15 01:51:01.252062] nvme_ctrlr.c:5304:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:16.229 passed 00:16:16.229 Test: admin_create_io_sq_shared_cq ...[2024-04-15 01:51:01.865070] vfio_user.c:2310:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:16.489 passed 00:16:16.489 00:16:16.489 Run Summary: Type Total Ran Passed Failed Inactive 00:16:16.489 suites 1 1 n/a 0 0 00:16:16.489 tests 18 18 18 0 0 00:16:16.489 asserts 360 360 360 0 n/a 00:16:16.489 00:16:16.489 Elapsed time = 1.568 seconds 00:16:16.489 01:51:01 -- compliance/compliance.sh@42 -- # killprocess 2137775 00:16:16.489 01:51:01 -- common/autotest_common.sh@926 -- # '[' -z 2137775 ']' 00:16:16.489 01:51:01 -- common/autotest_common.sh@930 -- # kill -0 2137775 00:16:16.489 01:51:01 -- common/autotest_common.sh@931 -- # uname 00:16:16.489 01:51:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:16.489 01:51:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2137775 00:16:16.489 01:51:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:16.489 01:51:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:16.489 01:51:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2137775' 00:16:16.489 killing process with pid 2137775 00:16:16.489 01:51:01 -- common/autotest_common.sh@945 -- # kill 2137775 00:16:16.489 01:51:01 -- common/autotest_common.sh@950 -- # wait 2137775 00:16:16.752 01:51:02 -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:16.752 01:51:02 -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:16.752 00:16:16.752 real 0m6.431s 00:16:16.752 user 0m18.521s 00:16:16.753 sys 0m0.569s 00:16:16.753 01:51:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.753 01:51:02 -- common/autotest_common.sh@10 -- # set +x 00:16:16.753 ************************************ 00:16:16.753 END TEST nvmf_vfio_user_nvme_compliance 00:16:16.753 ************************************ 00:16:16.753 01:51:02 -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:16.753 01:51:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:16.753 01:51:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:16.753 01:51:02 -- common/autotest_common.sh@10 -- # set +x 00:16:16.753 ************************************ 00:16:16.753 START TEST nvmf_vfio_user_fuzz 00:16:16.753 ************************************ 00:16:16.753 01:51:02 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:16.753 * Looking for test storage... 
00:16:16.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:16.753 01:51:02 -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:16.753 01:51:02 -- nvmf/common.sh@7 -- # uname -s 00:16:16.753 01:51:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.753 01:51:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.753 01:51:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.753 01:51:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.753 01:51:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.753 01:51:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.753 01:51:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.753 01:51:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.753 01:51:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.753 01:51:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.753 01:51:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.753 01:51:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.753 01:51:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.753 01:51:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.753 01:51:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:16.753 01:51:02 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:16.753 01:51:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.753 01:51:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.753 01:51:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.753 01:51:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.753 01:51:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.753 01:51:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.753 01:51:02 -- paths/export.sh@5 -- # export PATH 00:16:16.753 01:51:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.753 01:51:02 -- nvmf/common.sh@46 -- # : 0 00:16:16.753 01:51:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:16.753 01:51:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:16.753 01:51:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:16.753 01:51:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.753 01:51:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.753 01:51:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:16.753 01:51:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:16.753 01:51:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:16.753 01:51:02 -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:16.753 01:51:02 -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:16.753 01:51:02 -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:16.753 01:51:02 -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:16.753 01:51:02 -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:16.753 01:51:02 -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:16.753 01:51:02 -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:16.753 01:51:02 -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2138756 00:16:16.753 01:51:02 -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:16.753 01:51:02 -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2138756' 00:16:16.753 Process pid: 2138756 00:16:16.753 01:51:02 -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:16.753 01:51:02 -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2138756 00:16:16.753 01:51:02 -- common/autotest_common.sh@819 -- # '[' -z 2138756 ']' 00:16:16.753 01:51:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.753 01:51:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:16.753 01:51:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:16.753 01:51:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:16.753 01:51:02 -- common/autotest_common.sh@10 -- # set +x 00:16:17.694 01:51:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:17.694 01:51:03 -- common/autotest_common.sh@852 -- # return 0 00:16:17.694 01:51:03 -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:19.073 01:51:04 -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:19.073 01:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.073 01:51:04 -- common/autotest_common.sh@10 -- # set +x 00:16:19.073 01:51:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.073 01:51:04 -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:19.073 01:51:04 -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:19.073 01:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.074 01:51:04 -- common/autotest_common.sh@10 -- # set +x 00:16:19.074 malloc0 00:16:19.074 01:51:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.074 01:51:04 -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:19.074 01:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.074 01:51:04 -- common/autotest_common.sh@10 -- # set +x 00:16:19.074 01:51:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.074 01:51:04 -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:19.074 01:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.074 01:51:04 -- common/autotest_common.sh@10 -- # set +x 00:16:19.074 01:51:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.074 01:51:04 -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:19.074 01:51:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.074 01:51:04 -- common/autotest_common.sh@10 -- # set +x 00:16:19.074 01:51:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.074 01:51:04 -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:19.074 01:51:04 -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/vfio_user_fuzz -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:51.186 Fuzzing completed. 
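For reference, the trace above stands up the VFIOUSER fuzz target with a short RPC sequence before launching nvme_fuzz. The following is a minimal standalone sketch of that sequence, not the harness itself: it assumes an already-running nvmf_tgt listening on the default /var/tmp/spdk.sock, and the SPDK checkout path is illustrative.

  # Recreate the VFIOUSER fuzz target configured in the trace above.
  # SPDK_DIR is a placeholder; the CI run uses its workspace checkout.
  SPDK_DIR=/path/to/spdk
  RPC="$SPDK_DIR/scripts/rpc.py"

  "$RPC" nvmf_create_transport -t VFIOUSER        # enable the vfio-user transport
  mkdir -p /var/run/vfio-user                     # directory backing the listener socket
  "$RPC" bdev_malloc_create 64 512 -b malloc0     # 64 MiB RAM bdev, 512-byte blocks
  "$RPC" nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
  "$RPC" nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  "$RPC" nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0

nvme_fuzz is then pointed at the matching transport ID string (trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user) with a fixed seed (-S 123456) and a 30-second budget (-t 30), so a failing run can be replayed deterministically.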
Shutting down the fuzz application 00:16:51.186 00:16:51.186 Dumping successful admin opcodes: 00:16:51.186 8, 9, 10, 24, 00:16:51.186 Dumping successful io opcodes: 00:16:51.186 0, 00:16:51.186 NS: 0x200003a1ef00 I/O qp, Total commands completed: 566739, total successful commands: 2179, random_seed: 1468565760 00:16:51.186 NS: 0x200003a1ef00 admin qp, Total commands completed: 140967, total successful commands: 1144, random_seed: 3520127488 00:16:51.186 01:51:35 -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:51.186 01:51:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:51.186 01:51:35 -- common/autotest_common.sh@10 -- # set +x 00:16:51.186 01:51:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:51.186 01:51:35 -- target/vfio_user_fuzz.sh@46 -- # killprocess 2138756 00:16:51.186 01:51:35 -- common/autotest_common.sh@926 -- # '[' -z 2138756 ']' 00:16:51.186 01:51:35 -- common/autotest_common.sh@930 -- # kill -0 2138756 00:16:51.186 01:51:35 -- common/autotest_common.sh@931 -- # uname 00:16:51.186 01:51:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:51.186 01:51:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2138756 00:16:51.186 01:51:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:51.186 01:51:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:51.186 01:51:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2138756' 00:16:51.186 killing process with pid 2138756 00:16:51.186 01:51:35 -- common/autotest_common.sh@945 -- # kill 2138756 00:16:51.187 01:51:35 -- common/autotest_common.sh@950 -- # wait 2138756 00:16:51.187 01:51:35 -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:51.187 01:51:35 -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:51.187 00:16:51.187 real 0m33.204s 00:16:51.187 user 0m34.282s 00:16:51.187 sys 0m26.120s 00:16:51.187 01:51:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:51.187 01:51:35 -- common/autotest_common.sh@10 -- # set +x 00:16:51.187 ************************************ 00:16:51.187 END TEST nvmf_vfio_user_fuzz 00:16:51.187 ************************************ 00:16:51.187 01:51:35 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:51.187 01:51:35 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:51.187 01:51:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:51.187 01:51:35 -- common/autotest_common.sh@10 -- # set +x 00:16:51.187 ************************************ 00:16:51.187 START TEST nvmf_host_management 00:16:51.187 ************************************ 00:16:51.187 01:51:35 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:51.187 * Looking for test storage... 
00:16:51.187 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:51.187 01:51:35 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:51.187 01:51:35 -- nvmf/common.sh@7 -- # uname -s 00:16:51.187 01:51:35 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:51.187 01:51:35 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:51.187 01:51:35 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:51.187 01:51:35 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:51.187 01:51:35 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:51.187 01:51:35 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:51.187 01:51:35 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:51.187 01:51:35 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:51.187 01:51:35 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:51.187 01:51:35 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:51.187 01:51:35 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.187 01:51:35 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.187 01:51:35 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:51.187 01:51:35 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:51.187 01:51:35 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:51.187 01:51:35 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:51.187 01:51:35 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:51.187 01:51:35 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:51.187 01:51:35 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:51.187 01:51:35 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.187 01:51:35 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.187 01:51:35 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.187 01:51:35 -- paths/export.sh@5 -- # export PATH 00:16:51.187 01:51:35 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:51.187 01:51:35 -- nvmf/common.sh@46 -- # : 0 00:16:51.187 01:51:35 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:51.187 01:51:35 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:51.187 01:51:35 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:51.187 01:51:35 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:51.187 01:51:35 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:51.187 01:51:35 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:51.187 01:51:35 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:51.187 01:51:35 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:51.187 01:51:35 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:51.187 01:51:35 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:51.187 01:51:35 -- target/host_management.sh@104 -- # nvmftestinit 00:16:51.187 01:51:35 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:51.187 01:51:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:51.187 01:51:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:51.187 01:51:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:51.187 01:51:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:51.187 01:51:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:51.187 01:51:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:51.187 01:51:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:51.187 01:51:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:51.187 01:51:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:51.187 01:51:35 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:51.187 01:51:35 -- common/autotest_common.sh@10 -- # set +x 00:16:52.122 01:51:37 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:52.122 01:51:37 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:52.122 01:51:37 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:52.122 01:51:37 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:52.122 01:51:37 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:52.122 01:51:37 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:52.122 01:51:37 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:52.122 01:51:37 -- nvmf/common.sh@294 -- # net_devs=() 00:16:52.123 01:51:37 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:52.123 
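Before any NVMe-oF traffic can flow, nvmftestinit (traced below) scans for supported NICs, moves one port of the e810 pair into a private network namespace, and addresses both ends so target and initiator can exercise real hardware on a single host. Condensed, the topology it builds looks like the sketch here; cvl_0_0 and cvl_0_1 are the driver-assigned names from this particular run and will differ on other machines.

  # Two ports of one NIC, looped back through a network namespace:
  # the target binds inside cvl_0_0_ns_spdk, the initiator stays in the host netns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                               # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT      # allow NVMe/TCP in
  ping -c 1 10.0.0.2                                                # host -> target netns

The round-trip pings in the trace below (0.261 ms and 0.181 ms) confirm both directions work before modprobe nvme-tcp and the actual target startup.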
01:51:37 -- nvmf/common.sh@295 -- # e810=() 00:16:52.123 01:51:37 -- nvmf/common.sh@295 -- # local -ga e810 00:16:52.123 01:51:37 -- nvmf/common.sh@296 -- # x722=() 00:16:52.123 01:51:37 -- nvmf/common.sh@296 -- # local -ga x722 00:16:52.123 01:51:37 -- nvmf/common.sh@297 -- # mlx=() 00:16:52.123 01:51:37 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:52.123 01:51:37 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:52.123 01:51:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:52.123 01:51:37 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:52.123 01:51:37 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:52.123 01:51:37 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:52.123 01:51:37 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:52.123 01:51:37 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:52.123 01:51:37 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:52.123 01:51:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:52.123 01:51:37 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:52.123 01:51:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:52.123 01:51:37 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:52.123 01:51:37 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:16:52.123 01:51:37 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:52.123 01:51:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:52.123 01:51:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:52.123 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:52.123 01:51:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:52.123 01:51:37 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:52.123 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:52.123 01:51:37 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:52.123 01:51:37 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:52.123 01:51:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.123 01:51:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:52.123 01:51:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.123 01:51:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:16:52.123 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:52.123 01:51:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.123 01:51:37 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:52.123 01:51:37 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:52.123 01:51:37 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:52.123 01:51:37 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:52.123 01:51:37 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:52.123 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:52.123 01:51:37 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:52.123 01:51:37 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:52.123 01:51:37 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:52.123 01:51:37 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:16:52.123 01:51:37 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:52.123 01:51:37 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:52.123 01:51:37 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:52.123 01:51:37 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:16:52.123 01:51:37 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:52.123 01:51:37 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:52.123 01:51:37 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:16:52.123 01:51:37 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:52.123 01:51:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:52.123 01:51:37 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:16:52.123 01:51:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:16:52.123 01:51:37 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:16:52.123 01:51:37 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:52.123 01:51:37 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:52.123 01:51:37 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:52.123 01:51:37 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:16:52.123 01:51:37 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:52.123 01:51:37 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:52.123 01:51:37 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:52.123 01:51:37 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:16:52.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:52.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:16:52.123 00:16:52.123 --- 10.0.0.2 ping statistics --- 00:16:52.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.123 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:16:52.123 01:51:37 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:52.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:52.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:16:52.123 00:16:52.123 --- 10.0.0.1 ping statistics --- 00:16:52.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:52.123 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:16:52.123 01:51:37 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:52.123 01:51:37 -- nvmf/common.sh@410 -- # return 0 00:16:52.123 01:51:37 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:52.123 01:51:37 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:52.123 01:51:37 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:52.123 01:51:37 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:52.123 01:51:37 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:52.123 01:51:37 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:52.123 01:51:37 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:16:52.123 01:51:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:52.123 01:51:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:52.123 01:51:37 -- common/autotest_common.sh@10 -- # set +x 00:16:52.123 ************************************ 00:16:52.123 START TEST nvmf_host_management 00:16:52.123 ************************************ 00:16:52.123 01:51:37 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:16:52.123 01:51:37 -- target/host_management.sh@69 -- # starttarget 00:16:52.123 01:51:37 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:52.123 01:51:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:52.123 01:51:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:52.123 01:51:37 -- common/autotest_common.sh@10 -- # set +x 00:16:52.123 01:51:37 -- nvmf/common.sh@469 -- # nvmfpid=2144968 00:16:52.123 01:51:37 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:52.123 01:51:37 -- nvmf/common.sh@470 -- # waitforlisten 2144968 00:16:52.123 01:51:37 -- common/autotest_common.sh@819 -- # '[' -z 2144968 ']' 00:16:52.123 01:51:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:52.123 01:51:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:52.123 01:51:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:52.123 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:52.123 01:51:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:52.123 01:51:37 -- common/autotest_common.sh@10 -- # set +x 00:16:52.123 [2024-04-15 01:51:37.757723] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:16:52.123 [2024-04-15 01:51:37.757791] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:52.383 EAL: No free 2048 kB hugepages reported on node 1 00:16:52.383 [2024-04-15 01:51:37.825741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:52.383 [2024-04-15 01:51:37.918402] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:52.383 [2024-04-15 01:51:37.918541] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:52.383 [2024-04-15 01:51:37.918558] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:52.383 [2024-04-15 01:51:37.918570] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:52.383 [2024-04-15 01:51:37.918660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:52.383 [2024-04-15 01:51:37.918723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:52.383 [2024-04-15 01:51:37.918756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:52.383 [2024-04-15 01:51:37.918758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.318 01:51:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:53.318 01:51:38 -- common/autotest_common.sh@852 -- # return 0 00:16:53.318 01:51:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:53.318 01:51:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:53.318 01:51:38 -- common/autotest_common.sh@10 -- # set +x 00:16:53.318 01:51:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:53.318 01:51:38 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:53.318 01:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.318 01:51:38 -- common/autotest_common.sh@10 -- # set +x 00:16:53.318 [2024-04-15 01:51:38.743623] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:53.318 01:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.318 01:51:38 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:53.318 01:51:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:53.318 01:51:38 -- common/autotest_common.sh@10 -- # set +x 00:16:53.318 01:51:38 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:53.318 01:51:38 -- target/host_management.sh@23 -- # cat 00:16:53.318 01:51:38 -- target/host_management.sh@30 -- # rpc_cmd 00:16:53.318 01:51:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:53.318 01:51:38 -- common/autotest_common.sh@10 -- # set +x 00:16:53.318 Malloc0 00:16:53.318 [2024-04-15 01:51:38.804376] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.318 01:51:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:53.318 01:51:38 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:53.318 01:51:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:53.318 01:51:38 -- common/autotest_common.sh@10 -- # set +x 00:16:53.318 01:51:38 -- target/host_management.sh@73 -- # perfpid=2145150 00:16:53.318 01:51:38 -- target/host_management.sh@74 -- # 
waitforlisten 2145150 /var/tmp/bdevperf.sock 00:16:53.318 01:51:38 -- common/autotest_common.sh@819 -- # '[' -z 2145150 ']' 00:16:53.318 01:51:38 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:53.318 01:51:38 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:53.318 01:51:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:53.318 01:51:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:53.318 01:51:38 -- nvmf/common.sh@520 -- # config=() 00:16:53.318 01:51:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:53.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:53.318 01:51:38 -- nvmf/common.sh@520 -- # local subsystem config 00:16:53.318 01:51:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:53.318 01:51:38 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:53.318 01:51:38 -- common/autotest_common.sh@10 -- # set +x 00:16:53.318 01:51:38 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:53.318 { 00:16:53.318 "params": { 00:16:53.318 "name": "Nvme$subsystem", 00:16:53.318 "trtype": "$TEST_TRANSPORT", 00:16:53.318 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:53.318 "adrfam": "ipv4", 00:16:53.318 "trsvcid": "$NVMF_PORT", 00:16:53.318 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:53.318 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:53.318 "hdgst": ${hdgst:-false}, 00:16:53.318 "ddgst": ${ddgst:-false} 00:16:53.318 }, 00:16:53.318 "method": "bdev_nvme_attach_controller" 00:16:53.318 } 00:16:53.318 EOF 00:16:53.318 )") 00:16:53.318 01:51:38 -- nvmf/common.sh@542 -- # cat 00:16:53.318 01:51:38 -- nvmf/common.sh@544 -- # jq . 00:16:53.318 01:51:38 -- nvmf/common.sh@545 -- # IFS=, 00:16:53.318 01:51:38 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:53.318 "params": { 00:16:53.318 "name": "Nvme0", 00:16:53.318 "trtype": "tcp", 00:16:53.318 "traddr": "10.0.0.2", 00:16:53.318 "adrfam": "ipv4", 00:16:53.318 "trsvcid": "4420", 00:16:53.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:53.318 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:53.318 "hdgst": false, 00:16:53.318 "ddgst": false 00:16:53.318 }, 00:16:53.318 "method": "bdev_nvme_attach_controller" 00:16:53.318 }' 00:16:53.318 [2024-04-15 01:51:38.875495] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:16:53.318 [2024-04-15 01:51:38.875583] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2145150 ] 00:16:53.318 EAL: No free 2048 kB hugepages reported on node 1 00:16:53.318 [2024-04-15 01:51:38.937770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.577 [2024-04-15 01:51:39.022899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:53.835 Running I/O for 10 seconds... 
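At this point bdevperf is generating verify I/O against Nvme0n1 over NVMe/TCP, and the waitforio helper traced below polls bdevperf's private RPC socket until the bdev reports at least 100 completed reads before the host-management checks proceed. A rough standalone equivalent of that poll, assuming the same /var/tmp/bdevperf.sock plus rpc.py and jq on hand (the real helper structures its retry loop differently, but the idea is the same):

  # Poll bdevperf's iostat until Nvme0n1 has completed >= 100 reads,
  # giving up after 10 attempts. The rpc.py path is illustrative.
  RPC=/path/to/spdk/scripts/rpc.py
  for attempt in $(seq 1 10); do
      reads=$("$RPC" -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
              | jq -r '.bdevs[0].num_read_ops')
      [ "$reads" -ge 100 ] && break
      sleep 1
  done

In this run the very first sample already shows read_io_count=964, so the loop breaks immediately and the test moves on to removing the host from the subsystem while I/O is still in flight, which is what provokes the SQ-deletion aborts that follow.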
00:16:54.404 01:51:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:54.404 01:51:39 -- common/autotest_common.sh@852 -- # return 0 00:16:54.404 01:51:39 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:54.404 01:51:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.404 01:51:39 -- common/autotest_common.sh@10 -- # set +x 00:16:54.404 01:51:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.404 01:51:39 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:54.404 01:51:39 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:54.404 01:51:39 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:54.404 01:51:39 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:54.404 01:51:39 -- target/host_management.sh@52 -- # local ret=1 00:16:54.404 01:51:39 -- target/host_management.sh@53 -- # local i 00:16:54.404 01:51:39 -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:54.404 01:51:39 -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:54.404 01:51:39 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:54.404 01:51:39 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:54.404 01:51:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.404 01:51:39 -- common/autotest_common.sh@10 -- # set +x 00:16:54.404 01:51:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.404 01:51:39 -- target/host_management.sh@55 -- # read_io_count=964 00:16:54.404 01:51:39 -- target/host_management.sh@58 -- # '[' 964 -ge 100 ']' 00:16:54.404 01:51:39 -- target/host_management.sh@59 -- # ret=0 00:16:54.404 01:51:39 -- target/host_management.sh@60 -- # break 00:16:54.404 01:51:39 -- target/host_management.sh@64 -- # return 0 00:16:54.404 01:51:39 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:54.404 01:51:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.404 01:51:39 -- common/autotest_common.sh@10 -- # set +x 00:16:54.404 [2024-04-15 01:51:39.880078] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(5) to be set 00:16:54.404 [2024-04-15 01:51:39.880190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(5) to be set 00:16:54.404 [2024-04-15 01:51:39.880207] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(5) to be set 00:16:54.405 [2024-04-15 01:51:39.880221] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(5) to be set 00:16:54.405 [2024-04-15 01:51:39.880234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(5) to be set 00:16:54.405 [2024-04-15 01:51:39.880247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(5) to be set 00:16:54.405 [2024-04-15 01:51:39.880260] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(5) to be set 00:16:54.405 [2024-04-15 01:51:39.880273] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the 
state(5) to be set 00:16:54.405 [2024-04-15
01:51:39.880862] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(5) to be set 00:16:54.405 [2024-04-15 01:51:39.880874] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(5) to be set 00:16:54.405 [2024-04-15 01:51:39.880887] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(5) to be set 00:16:54.405 [2024-04-15 01:51:39.880904] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(5) to be set 00:16:54.405 [2024-04-15 01:51:39.880918] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(5) to be set 00:16:54.405 [2024-04-15 01:51:39.880931] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(5) to be set 00:16:54.405 [2024-04-15 01:51:39.880944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(5) to be set 00:16:54.405 [2024-04-15 01:51:39.880958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(5) to be set 00:16:54.405 [2024-04-15 01:51:39.880971] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(5) to be set 00:16:54.405 [2024-04-15 01:51:39.880984] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x127cb20 is same with the state(5) to be set 00:16:54.405 [2024-04-15 01:51:39.881869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.405 [2024-04-15 01:51:39.881911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.405 [2024-04-15 01:51:39.881938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.405 [2024-04-15 01:51:39.881956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.405 [2024-04-15 01:51:39.881974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:130816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.405 [2024-04-15 01:51:39.881989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.405 [2024-04-15 01:51:39.882007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:130944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.405 [2024-04-15 01:51:39.882022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.405 [2024-04-15 01:51:39.882038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.405 [2024-04-15 01:51:39.882067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.405 [2024-04-15 01:51:39.882087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6144 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:54.405 [2024-04-15 01:51:39.882104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.405 [2024-04-15 01:51:39.882121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.405 [2024-04-15 01:51:39.882136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.405 [2024-04-15 01:51:39.882152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.405 [2024-04-15 01:51:39.882166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.405 [2024-04-15 01:51:39.882183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.405 [2024-04-15 01:51:39.882199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.405 [2024-04-15 01:51:39.882216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.405 [2024-04-15 01:51:39.882236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.405 [2024-04-15 01:51:39.882255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.405 [2024-04-15 01:51:39.882272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.406 [2024-04-15 01:51:39.882289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.406 [2024-04-15 01:51:39.882305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.406 [2024-04-15 01:51:39.882322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.406 [2024-04-15 01:51:39.882349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.406 [2024-04-15 01:51:39.882366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:6784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.406 [2024-04-15 01:51:39.882383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.406 [2024-04-15 01:51:39.882401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.406 [2024-04-15 01:51:39.882418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.406 [2024-04-15 01:51:39.882436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.406 
[2024-04-15 01:51:39.882452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:54.406 [2024-04-15 01:51:39.882470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:54.406 [... dozens of identical command/completion pairs follow: every remaining outstanding READ/WRITE on sqid:1 (lba 1664 through 10624, len:128) is printed and then completed with ABORTED - SQ DELETION (00/08) as the queue is torn down; the repeated notices are elided here ...] 00:16:54.407 [2024-04-15 01:51:39.884234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1873c00 is same with the state(5) to be set 00:16:54.407 [2024-04-15 01:51:39.884306] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1873c00 was disconnected and freed. reset controller.
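Each NOTICE pair above prints an in-flight I/O command next to its forced completion: "ABORTED - SQ DELETION (00/08)" is NVMe status code type 0x0 (generic) / status code 0x08 (Command Aborted due to SQ Deletion), reported for every outstanding READ/WRITE while the reset deletes submission queue 1. A minimal sketch for summarizing such a flood from a saved log (the file name bdevperf.log is an assumption, not something this run produced):

    # tally the aborted commands by opcode; each aborted command is echoed
    # by nvme_io_qpair_print_command right before its completion record
    grep -oE 'nvme_io_qpair_print_command: \*NOTICE\*: [A-Z]+' bdevperf.log \
      | awk '{print $NF}' | sort | uniq -c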
00:16:54.407 01:51:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.407 01:51:39 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:54.407 01:51:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:54.407 01:51:39 -- common/autotest_common.sh@10 -- # set +x 00:16:54.407 [2024-04-15 01:51:39.885439] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:54.407 task offset: 5760 on job bdev=Nvme0n1 fails 00:16:54.407 00:16:54.407 Latency(us) 00:16:54.407 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:54.407 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:54.407 Job: Nvme0n1 ended in about 0.56 seconds with error 00:16:54.407 Verification LBA range: start 0x0 length 0x400 00:16:54.407 Nvme0n1 : 0.56 1854.18 115.89 113.67 0.00 32149.64 6796.33 33010.73 00:16:54.407 =================================================================================================================== 00:16:54.407 Total : 1854.18 115.89 113.67 0.00 32149.64 6796.33 33010.73 00:16:54.407 [2024-04-15 01:51:39.887399] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:54.407 [2024-04-15 01:51:39.887428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1876030 (9): Bad file descriptor 00:16:54.407 01:51:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:54.407 01:51:39 -- target/host_management.sh@87 -- # sleep 1 00:16:54.407 [2024-04-15 01:51:39.903527] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:55.344 01:51:40 -- target/host_management.sh@91 -- # kill -9 2145150 00:16:55.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2145150) - No such process 00:16:55.344 01:51:40 -- target/host_management.sh@91 -- # true 00:16:55.344 01:51:40 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:55.344 01:51:40 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:55.344 01:51:40 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:55.344 01:51:40 -- nvmf/common.sh@520 -- # config=() 00:16:55.344 01:51:40 -- nvmf/common.sh@520 -- # local subsystem config 00:16:55.344 01:51:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:55.344 01:51:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:55.344 { 00:16:55.344 "params": { 00:16:55.344 "name": "Nvme$subsystem", 00:16:55.344 "trtype": "$TEST_TRANSPORT", 00:16:55.344 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:55.344 "adrfam": "ipv4", 00:16:55.344 "trsvcid": "$NVMF_PORT", 00:16:55.344 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:55.344 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:55.344 "hdgst": ${hdgst:-false}, 00:16:55.344 "ddgst": ${ddgst:-false} 00:16:55.344 }, 00:16:55.344 "method": "bdev_nvme_attach_controller" 00:16:55.344 } 00:16:55.344 EOF 00:16:55.344 )") 00:16:55.344 01:51:40 -- nvmf/common.sh@542 -- # cat 00:16:55.344 01:51:40 -- nvmf/common.sh@544 -- # jq . 
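gen_nvmf_target_json above assembles the bdev subsystem config that bdevperf reads from /dev/fd/62; the jq-rendered result is printed just below. A standalone sketch of the same run using a config file instead of a file descriptor (the "subsystems" wrapper shape and the relative binary path are assumptions; the parameters are the ones this log prints):

    # attach the NVMe-oF TCP controller at 10.0.0.2:4420 as bdev Nvme0,
    # then run the same 64-deep, 64 KiB verify workload for one second
    cat > /tmp/bdevperf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    ./build/examples/bdevperf --json /tmp/bdevperf.json -q 64 -o 65536 -w verify -t 1

The successful pass below reports 2083.33 IOPS at 64 KiB, i.e. 2083.33 x 65536 B, which is the 130.21 MiB/s shown in its table.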
00:16:55.344 01:51:40 -- nvmf/common.sh@545 -- # IFS=, 00:16:55.344 01:51:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:55.344 "params": { 00:16:55.344 "name": "Nvme0", 00:16:55.344 "trtype": "tcp", 00:16:55.344 "traddr": "10.0.0.2", 00:16:55.344 "adrfam": "ipv4", 00:16:55.344 "trsvcid": "4420", 00:16:55.344 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:55.344 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:55.344 "hdgst": false, 00:16:55.344 "ddgst": false 00:16:55.344 }, 00:16:55.344 "method": "bdev_nvme_attach_controller" 00:16:55.344 }' 00:16:55.344 [2024-04-15 01:51:40.933967] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:16:55.344 [2024-04-15 01:51:40.934072] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2145433 ] 00:16:55.344 EAL: No free 2048 kB hugepages reported on node 1 00:16:55.603 [2024-04-15 01:51:40.995026] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.603 [2024-04-15 01:51:41.078384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.862 Running I/O for 1 seconds... 00:16:56.798 00:16:56.798 Latency(us) 00:16:56.798 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.798 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:16:56.798 Verification LBA range: start 0x0 length 0x400 00:16:56.798 Nvme0n1 : 1.02 2083.33 130.21 0.00 0.00 30320.92 3543.80 39418.69 00:16:56.798 =================================================================================================================== 00:16:56.798 Total : 2083.33 130.21 0.00 0.00 30320.92 3543.80 39418.69 00:16:57.056 01:51:42 -- target/host_management.sh@101 -- # stoptarget 00:16:57.056 01:51:42 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:57.056 01:51:42 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:57.056 01:51:42 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:57.056 01:51:42 -- target/host_management.sh@40 -- # nvmftestfini 00:16:57.056 01:51:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:57.056 01:51:42 -- nvmf/common.sh@116 -- # sync 00:16:57.056 01:51:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:57.056 01:51:42 -- nvmf/common.sh@119 -- # set +e 00:16:57.056 01:51:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:57.056 01:51:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:57.056 rmmod nvme_tcp 00:16:57.056 rmmod nvme_fabrics 00:16:57.056 rmmod nvme_keyring 00:16:57.315 01:51:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:57.315 01:51:42 -- nvmf/common.sh@123 -- # set -e 00:16:57.315 01:51:42 -- nvmf/common.sh@124 -- # return 0 00:16:57.315 01:51:42 -- nvmf/common.sh@477 -- # '[' -n 2144968 ']' 00:16:57.315 01:51:42 -- nvmf/common.sh@478 -- # killprocess 2144968 00:16:57.315 01:51:42 -- common/autotest_common.sh@926 -- # '[' -z 2144968 ']' 00:16:57.315 01:51:42 -- common/autotest_common.sh@930 -- # kill -0 2144968 00:16:57.315 01:51:42 -- common/autotest_common.sh@931 -- # uname 00:16:57.315 01:51:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:57.315 01:51:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2144968 00:16:57.315 01:51:42 
-- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:57.315 01:51:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:57.315 01:51:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2144968' 00:16:57.315 killing process with pid 2144968 00:16:57.315 01:51:42 -- common/autotest_common.sh@945 -- # kill 2144968 00:16:57.315 01:51:42 -- common/autotest_common.sh@950 -- # wait 2144968 00:16:57.575 [2024-04-15 01:51:42.969918] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:57.575 01:51:43 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:57.575 01:51:43 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:57.575 01:51:43 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:57.575 01:51:43 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:57.575 01:51:43 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:57.575 01:51:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:57.575 01:51:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:57.575 01:51:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.484 01:51:45 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:16:59.484 00:16:59.484 real 0m7.331s 00:16:59.484 user 0m21.871s 00:16:59.484 sys 0m1.444s 00:16:59.484 01:51:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:59.484 01:51:45 -- common/autotest_common.sh@10 -- # set +x 00:16:59.485 ************************************ 00:16:59.485 END TEST nvmf_host_management 00:16:59.485 ************************************ 00:16:59.485 01:51:45 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:16:59.485 00:16:59.485 real 0m9.567s 00:16:59.485 user 0m22.686s 00:16:59.485 sys 0m2.896s 00:16:59.485 01:51:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:59.485 01:51:45 -- common/autotest_common.sh@10 -- # set +x 00:16:59.485 ************************************ 00:16:59.485 END TEST nvmf_host_management 00:16:59.485 ************************************ 00:16:59.485 01:51:45 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:59.485 01:51:45 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:59.485 01:51:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:59.485 01:51:45 -- common/autotest_common.sh@10 -- # set +x 00:16:59.485 ************************************ 00:16:59.485 START TEST nvmf_lvol 00:16:59.485 ************************************ 00:16:59.485 01:51:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:59.485 * Looking for test storage... 
00:16:59.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:59.743 01:51:45 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:59.743 01:51:45 -- nvmf/common.sh@7 -- # uname -s 00:16:59.743 01:51:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:59.743 01:51:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:59.743 01:51:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:59.743 01:51:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:59.743 01:51:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:59.743 01:51:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:59.743 01:51:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:59.744 01:51:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:59.744 01:51:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:59.744 01:51:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:59.744 01:51:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.744 01:51:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:59.744 01:51:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:59.744 01:51:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:59.744 01:51:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:59.744 01:51:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:59.744 01:51:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:59.744 01:51:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:59.744 01:51:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh [... paths/export.sh@2-@6 prepend the golangci/protoc/go toolchain directories and re-export PATH; the four near-identical multi-hundred-character PATH dumps are elided ...] 00:16:59.744 01:51:45 -- nvmf/common.sh@46 -- # : 0 00:16:59.744 01:51:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:59.744 01:51:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:59.744 01:51:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:59.744 01:51:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:59.744 01:51:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:59.744 01:51:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:59.744 01:51:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:59.744 01:51:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:59.744 01:51:45 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:59.744 01:51:45 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:59.744 01:51:45 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:59.744 01:51:45 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:59.744 01:51:45 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:59.744 01:51:45 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:59.744 01:51:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:59.744 01:51:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:59.744 01:51:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:59.744 01:51:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:59.744 01:51:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:59.744 01:51:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:59.744 01:51:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:59.744 01:51:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:59.744 01:51:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:59.744 01:51:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:59.744 01:51:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:59.744 01:51:45 -- common/autotest_common.sh@10 -- # set +x 00:17:01.646 01:51:47 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:01.646 01:51:47 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:01.646 01:51:47 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:01.646 01:51:47 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:01.646 01:51:47
-- nvmf/common.sh@292 -- # pci_drivers=() 00:17:01.646 01:51:47 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:01.646 01:51:47 -- nvmf/common.sh@294 -- # net_devs=() 00:17:01.646 01:51:47 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:01.646 01:51:47 -- nvmf/common.sh@295 -- # e810=() 00:17:01.646 01:51:47 -- nvmf/common.sh@295 -- # local -ga e810 00:17:01.646 01:51:47 -- nvmf/common.sh@296 -- # x722=() 00:17:01.646 01:51:47 -- nvmf/common.sh@296 -- # local -ga x722 00:17:01.646 01:51:47 -- nvmf/common.sh@297 -- # mlx=() 00:17:01.646 01:51:47 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:01.646 01:51:47 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.646 01:51:47 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.646 01:51:47 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.646 01:51:47 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.646 01:51:47 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.646 01:51:47 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.646 01:51:47 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.646 01:51:47 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.646 01:51:47 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.646 01:51:47 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.646 01:51:47 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.646 01:51:47 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:01.646 01:51:47 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:17:01.646 01:51:47 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:17:01.646 01:51:47 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:17:01.646 01:51:47 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:17:01.646 01:51:47 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:01.646 01:51:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:01.646 01:51:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:01.646 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:01.646 01:51:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:01.646 01:51:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:01.646 01:51:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.646 01:51:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.646 01:51:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:01.646 01:51:47 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:01.646 01:51:47 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:01.646 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:01.646 01:51:47 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:17:01.646 01:51:47 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:17:01.646 01:51:47 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.646 01:51:47 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.646 01:51:47 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:17:01.646 01:51:47 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:01.646 01:51:47 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:17:01.646 01:51:47 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:17:01.647 01:51:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:01.647 01:51:47 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.647 01:51:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:01.647 01:51:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.647 01:51:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:01.647 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:01.647 01:51:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.647 01:51:47 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:01.647 01:51:47 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.647 01:51:47 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:01.647 01:51:47 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.647 01:51:47 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:01.647 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:01.647 01:51:47 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.647 01:51:47 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:01.647 01:51:47 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:01.647 01:51:47 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:01.647 01:51:47 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:01.647 01:51:47 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:01.647 01:51:47 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.647 01:51:47 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.647 01:51:47 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.647 01:51:47 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:01.647 01:51:47 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.647 01:51:47 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.647 01:51:47 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:01.647 01:51:47 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.647 01:51:47 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.647 01:51:47 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:01.647 01:51:47 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:01.647 01:51:47 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.647 01:51:47 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.647 01:51:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.647 01:51:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.647 01:51:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:01.647 01:51:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:01.647 01:51:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:01.647 01:51:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:01.647 01:51:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:01.647 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.647 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:17:01.647 00:17:01.647 --- 10.0.0.2 ping statistics --- 00:17:01.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.647 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:17:01.647 01:51:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:01.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:01.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:17:01.647 00:17:01.647 --- 10.0.0.1 ping statistics --- 00:17:01.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.647 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:17:01.647 01:51:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.647 01:51:47 -- nvmf/common.sh@410 -- # return 0 00:17:01.647 01:51:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:01.647 01:51:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.647 01:51:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:01.647 01:51:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:01.647 01:51:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.647 01:51:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:01.647 01:51:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:01.647 01:51:47 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:17:01.647 01:51:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:01.647 01:51:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:01.647 01:51:47 -- common/autotest_common.sh@10 -- # set +x 00:17:01.647 01:51:47 -- nvmf/common.sh@469 -- # nvmfpid=2147543 00:17:01.647 01:51:47 -- nvmf/common.sh@470 -- # waitforlisten 2147543 00:17:01.647 01:51:47 -- common/autotest_common.sh@819 -- # '[' -z 2147543 ']' 00:17:01.647 01:51:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:01.647 01:51:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:01.647 01:51:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:01.647 01:51:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:01.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:01.647 01:51:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:01.647 01:51:47 -- common/autotest_common.sh@10 -- # set +x 00:17:01.905 [2024-04-15 01:51:47.295740] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:01.905 [2024-04-15 01:51:47.295828] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:01.905 EAL: No free 2048 kB hugepages reported on node 1 00:17:01.905 [2024-04-15 01:51:47.366027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:01.905 [2024-04-15 01:51:47.454181] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:01.905 [2024-04-15 01:51:47.454356] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:01.905 [2024-04-15 01:51:47.454377] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:01.905 [2024-04-15 01:51:47.454392] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
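The nvmftestinit trace above shows how the harness gets real NVMe/TCP traffic onto the wire with a single host: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1. Condensed into a runnable sketch, with the device names as discovered in this log:

    # target port into its own namespace; initiator port stays in the root ns
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and verify reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The target itself then runs inside that namespace (ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace above.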
00:17:01.905 [2024-04-15 01:51:47.454476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.905 [2024-04-15 01:51:47.454534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.905 [2024-04-15 01:51:47.454537] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.841 01:51:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:02.841 01:51:48 -- common/autotest_common.sh@852 -- # return 0 00:17:02.841 01:51:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:02.841 01:51:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:02.841 01:51:48 -- common/autotest_common.sh@10 -- # set +x 00:17:02.841 01:51:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.841 01:51:48 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:02.841 [2024-04-15 01:51:48.454648] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:02.841 01:51:48 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:03.408 01:51:48 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:17:03.408 01:51:48 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:03.408 01:51:49 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:17:03.408 01:51:49 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:17:03.665 01:51:49 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:17:03.923 01:51:49 -- target/nvmf_lvol.sh@29 -- # lvs=936042b4-8fde-451b-8a35-425591c720f0 00:17:03.923 01:51:49 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 936042b4-8fde-451b-8a35-425591c720f0 lvol 20 00:17:04.181 01:51:49 -- target/nvmf_lvol.sh@32 -- # lvol=f08ae26b-cefd-45bf-8694-88354ccd5da5 00:17:04.181 01:51:49 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:04.440 01:51:50 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f08ae26b-cefd-45bf-8694-88354ccd5da5 00:17:04.698 01:51:50 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:04.956 [2024-04-15 01:51:50.482333] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.956 01:51:50 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:05.215 01:51:50 -- target/nvmf_lvol.sh@42 -- # perf_pid=2147982 00:17:05.215 01:51:50 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:17:05.215 01:51:50 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:17:05.215 EAL: No free 2048 kB hugepages reported on node 1 00:17:06.183 
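At this point the rpc.py calls above have stacked the data path as Malloc0+Malloc1 (64 MiB each, 512 B blocks) -> raid0 -> lvstore lvs (936042b4-8fde-451b-8a35-425591c720f0) -> 20 MiB lvol f08ae26b-cefd-45bf-8694-88354ccd5da5, exported as a namespace of nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420. The same provisioning as one condensed sketch (the shortened rpc.py path and the closing nvme discover check are assumptions, not part of this test's flow):

    rpc_py=./scripts/rpc.py   # assumed to run from an SPDK checkout
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512        # -> Malloc0
    $rpc_py bdev_malloc_create 64 512        # -> Malloc1
    $rpc_py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    lvs=$($rpc_py bdev_lvol_create_lvstore raid0 lvs)    # prints the lvstore UUID
    lvol=$($rpc_py bdev_lvol_create -u "$lvs" lvol 20)   # 20 MiB logical volume
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    nvme discover -t tcp -a 10.0.0.2 -s 4420   # initiator-side sanity check (assumed)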
01:51:51 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f08ae26b-cefd-45bf-8694-88354ccd5da5 MY_SNAPSHOT 00:17:06.442 01:51:52 -- target/nvmf_lvol.sh@47 -- # snapshot=f01cba5b-42a8-4c96-9426-0f78de92d4eb 00:17:06.442 01:51:52 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f08ae26b-cefd-45bf-8694-88354ccd5da5 30 00:17:06.700 01:51:52 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone f01cba5b-42a8-4c96-9426-0f78de92d4eb MY_CLONE 00:17:06.957 01:51:52 -- target/nvmf_lvol.sh@49 -- # clone=877f32f5-4890-484a-8ee4-d9ae7c807675 00:17:06.957 01:51:52 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 877f32f5-4890-484a-8ee4-d9ae7c807675 00:17:07.522 01:51:52 -- target/nvmf_lvol.sh@53 -- # wait 2147982 00:17:15.628 Initializing NVMe Controllers 00:17:15.628 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:17:15.628 Controller IO queue size 128, less than required. 00:17:15.628 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:15.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:17:15.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:17:15.628 Initialization complete. Launching workers. 00:17:15.628 ======================================================== 00:17:15.628 Latency(us) 00:17:15.628 Device Information : IOPS MiB/s Average min max 00:17:15.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8738.70 34.14 14656.23 556.05 68183.94 00:17:15.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11781.60 46.02 10867.71 1846.96 70514.06 00:17:15.628 ======================================================== 00:17:15.628 Total : 20520.30 80.16 12481.08 556.05 70514.06 00:17:15.628 00:17:15.628 01:52:01 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:15.885 01:52:01 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f08ae26b-cefd-45bf-8694-88354ccd5da5 00:17:16.143 01:52:01 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 936042b4-8fde-451b-8a35-425591c720f0 00:17:16.402 01:52:01 -- target/nvmf_lvol.sh@60 -- # rm -f 00:17:16.402 01:52:01 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:17:16.402 01:52:01 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:17:16.402 01:52:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:16.402 01:52:01 -- nvmf/common.sh@116 -- # sync 00:17:16.402 01:52:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:16.402 01:52:01 -- nvmf/common.sh@119 -- # set +e 00:17:16.402 01:52:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:16.402 01:52:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:16.402 rmmod nvme_tcp 00:17:16.402 rmmod nvme_fabrics 00:17:16.402 rmmod nvme_keyring 00:17:16.402 01:52:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:16.402 01:52:01 -- nvmf/common.sh@123 -- # set -e 00:17:16.402 01:52:01 -- nvmf/common.sh@124 -- # return 0 00:17:16.402 01:52:01 -- nvmf/common.sh@477 -- # '[' -n 2147543 ']' 
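The four bdev_lvol calls traced at the start of this block are the actual subject of the test, issued while spdk_nvme_perf keeps random writes in flight: snapshot the live volume, grow it from its initial 20 MiB toward LVOL_BDEV_FINAL_SIZE=30, clone the read-only snapshot, and inflate the clone so it allocates its own clusters and no longer depends on the snapshot. As a standalone sketch, reusing the UUIDs this run printed:

    $rpc_py bdev_lvol_snapshot f08ae26b-cefd-45bf-8694-88354ccd5da5 MY_SNAPSHOT
    $rpc_py bdev_lvol_resize f08ae26b-cefd-45bf-8694-88354ccd5da5 30
    $rpc_py bdev_lvol_clone f01cba5b-42a8-4c96-9426-0f78de92d4eb MY_CLONE
    $rpc_py bdev_lvol_inflate 877f32f5-4890-484a-8ee4-d9ae7c807675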
00:17:16.402 01:52:01 -- nvmf/common.sh@478 -- # killprocess 2147543 00:17:16.402 01:52:01 -- common/autotest_common.sh@926 -- # '[' -z 2147543 ']' 00:17:16.402 01:52:01 -- common/autotest_common.sh@930 -- # kill -0 2147543 00:17:16.402 01:52:01 -- common/autotest_common.sh@931 -- # uname 00:17:16.402 01:52:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:16.402 01:52:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2147543 00:17:16.402 01:52:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:16.402 01:52:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:16.402 01:52:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2147543' 00:17:16.402 killing process with pid 2147543 00:17:16.402 01:52:01 -- common/autotest_common.sh@945 -- # kill 2147543 00:17:16.402 01:52:01 -- common/autotest_common.sh@950 -- # wait 2147543 00:17:16.661 01:52:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:16.661 01:52:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:16.661 01:52:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:16.661 01:52:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:16.661 01:52:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:16.661 01:52:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:16.661 01:52:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:16.661 01:52:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.197 01:52:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:17:19.197 00:17:19.197 real 0m19.223s 00:17:19.197 user 1m2.936s 00:17:19.197 sys 0m6.446s 00:17:19.197 01:52:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:19.197 01:52:04 -- common/autotest_common.sh@10 -- # set +x 00:17:19.197 ************************************ 00:17:19.197 END TEST nvmf_lvol 00:17:19.197 ************************************ 00:17:19.197 01:52:04 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:19.197 01:52:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:19.197 01:52:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:19.197 01:52:04 -- common/autotest_common.sh@10 -- # set +x 00:17:19.197 ************************************ 00:17:19.197 START TEST nvmf_lvs_grow 00:17:19.197 ************************************ 00:17:19.197 01:52:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:19.197 * Looking for test storage... 
00:17:19.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:19.197 01:52:04 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh [... nvmf/common.sh traces the same defaults and toolchain PATH exports shown above for nvmf_lvol; the duplicated variable and PATH dumps are elided ...] 00:17:19.198 01:52:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:19.198 01:52:04 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:19.198 01:52:04 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:19.198 01:52:04 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:17:19.198 01:52:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:19.198 01:52:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:19.198 01:52:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:19.198 01:52:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:19.198 01:52:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:19.198 01:52:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:19.198 01:52:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:19.198 01:52:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:19.198 01:52:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:19.198 01:52:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:19.198 01:52:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:19.198 01:52:04 -- common/autotest_common.sh@10 -- # set +x 00:17:21.100 01:52:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:21.100 01:52:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:21.100 01:52:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:21.100 01:52:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:21.100 01:52:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:21.100 01:52:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:21.100 01:52:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:21.100 01:52:06 -- nvmf/common.sh@294 -- # net_devs=() 00:17:21.100 01:52:06
-- nvmf/common.sh@294 -- # local -ga net_devs [... the same e810/x722/mlx PCI-ID tables and device matching traced above for nvmf_lvol repeat here, again finding 0000:0a:00.0 and 0000:0a:00.1 (0x8086 - 0x159b); the duplicated probe trace is elided ...] 00:17:21.100 01:52:06 --
nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:21.100 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:21.100 01:52:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.100 01:52:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:21.100 01:52:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:21.100 01:52:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:21.100 01:52:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:21.100 01:52:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:21.100 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:21.100 01:52:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:21.100 01:52:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:21.100 01:52:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:21.100 01:52:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:21.100 01:52:06 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:17:21.100 01:52:06 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:17:21.100 01:52:06 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:21.100 01:52:06 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:21.100 01:52:06 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:21.100 01:52:06 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:17:21.100 01:52:06 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:21.100 01:52:06 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:21.100 01:52:06 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:17:21.100 01:52:06 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:21.100 01:52:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:21.100 01:52:06 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:17:21.100 01:52:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:17:21.100 01:52:06 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:17:21.100 01:52:06 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:21.100 01:52:06 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:21.100 01:52:06 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:21.100 01:52:06 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:17:21.100 01:52:06 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:21.101 01:52:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:21.101 01:52:06 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:21.101 01:52:06 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:17:21.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:21.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:17:21.101 00:17:21.101 --- 10.0.0.2 ping statistics --- 00:17:21.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.101 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:17:21.101 01:52:06 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:21.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:21.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:17:21.101 00:17:21.101 --- 10.0.0.1 ping statistics --- 00:17:21.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:21.101 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:17:21.101 01:52:06 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:21.101 01:52:06 -- nvmf/common.sh@410 -- # return 0 00:17:21.101 01:52:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:21.101 01:52:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:21.101 01:52:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:21.101 01:52:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:21.101 01:52:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:21.101 01:52:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:21.101 01:52:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:21.101 01:52:06 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:17:21.101 01:52:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:21.101 01:52:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:21.101 01:52:06 -- common/autotest_common.sh@10 -- # set +x 00:17:21.101 01:52:06 -- nvmf/common.sh@469 -- # nvmfpid=2151299 00:17:21.101 01:52:06 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:21.101 01:52:06 -- nvmf/common.sh@470 -- # waitforlisten 2151299 00:17:21.101 01:52:06 -- common/autotest_common.sh@819 -- # '[' -z 2151299 ']' 00:17:21.101 01:52:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:21.101 01:52:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:21.101 01:52:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:21.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:21.101 01:52:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:21.101 01:52:06 -- common/autotest_common.sh@10 -- # set +x 00:17:21.101 [2024-04-15 01:52:06.515953] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:21.101 [2024-04-15 01:52:06.516066] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.101 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.101 [2024-04-15 01:52:06.586246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.101 [2024-04-15 01:52:06.673823] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:21.101 [2024-04-15 01:52:06.673997] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:21.101 [2024-04-15 01:52:06.674017] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.101 [2024-04-15 01:52:06.674031] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
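The nvmf_tcp_init sequence above turns the two ice ports into a point-to-point NVMe/TCP rig: cvl_0_0 is moved into a private network namespace and addressed as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), TCP port 4420 is opened with iptables, and a ping in each direction verifies the path before nvmf_tgt is launched inside the namespace. A minimal standalone sketch of the same wiring follows, assuming two generic port names eth0/eth1 in place of the cvl_0_* net devices the log discovered, and assuming (as on this phy rig) that the two ports are physically looped back to each other:

# Sketch of the netns wiring done by nvmf_tcp_init; interface names are assumptions.
sudo ip -4 addr flush eth0
sudo ip -4 addr flush eth1
sudo ip netns add target_ns                                      # private ns for the target
sudo ip link set eth0 netns target_ns                            # target port leaves the root ns
sudo ip addr add 10.0.0.1/24 dev eth1                            # initiator side (root ns)
sudo ip netns exec target_ns ip addr add 10.0.0.2/24 dev eth0    # target side (inside ns)
sudo ip link set eth1 up
sudo ip netns exec target_ns ip link set eth0 up
sudo ip netns exec target_ns ip link set lo up
sudo iptables -I INPUT 1 -i eth1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                               # initiator -> target
sudo ip netns exec target_ns ping -c 1 10.0.0.1                  # target -> initiator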
00:17:21.101 [2024-04-15 01:52:06.674089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.038 01:52:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:22.038 01:52:07 -- common/autotest_common.sh@852 -- # return 0 00:17:22.038 01:52:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:22.038 01:52:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:22.038 01:52:07 -- common/autotest_common.sh@10 -- # set +x 00:17:22.038 01:52:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:22.038 01:52:07 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:22.296 [2024-04-15 01:52:07.731203] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:22.296 01:52:07 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:17:22.296 01:52:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:22.296 01:52:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:22.296 01:52:07 -- common/autotest_common.sh@10 -- # set +x 00:17:22.296 ************************************ 00:17:22.296 START TEST lvs_grow_clean 00:17:22.296 ************************************ 00:17:22.296 01:52:07 -- common/autotest_common.sh@1104 -- # lvs_grow 00:17:22.296 01:52:07 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:22.297 01:52:07 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:22.297 01:52:07 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:22.297 01:52:07 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:22.297 01:52:07 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:22.297 01:52:07 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:22.297 01:52:07 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:22.297 01:52:07 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:22.297 01:52:07 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:22.555 01:52:08 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:22.555 01:52:08 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:22.814 01:52:08 -- target/nvmf_lvs_grow.sh@28 -- # lvs=d3258cf4-5119-49c5-b713-7d5997a90ecd 00:17:22.814 01:52:08 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3258cf4-5119-49c5-b713-7d5997a90ecd 00:17:22.814 01:52:08 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:23.072 01:52:08 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:23.072 01:52:08 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:23.072 01:52:08 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d3258cf4-5119-49c5-b713-7d5997a90ecd lvol 150 00:17:23.331 01:52:08 -- target/nvmf_lvs_grow.sh@33 -- # lvol=308ef21d-6bb4-42b3-aab3-891cf85a4333 00:17:23.331 01:52:08 -- 
target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:23.331 01:52:08 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:23.590 [2024-04-15 01:52:09.003226] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:23.590 [2024-04-15 01:52:09.003302] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:23.590 true 00:17:23.590 01:52:09 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3258cf4-5119-49c5-b713-7d5997a90ecd 00:17:23.590 01:52:09 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:23.848 01:52:09 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:23.848 01:52:09 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:24.107 01:52:09 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 308ef21d-6bb4-42b3-aab3-891cf85a4333 00:17:24.107 01:52:09 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:24.365 [2024-04-15 01:52:09.962262] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:24.365 01:52:09 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:24.624 01:52:10 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2151876 00:17:24.624 01:52:10 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:24.624 01:52:10 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2151876 /var/tmp/bdevperf.sock 00:17:24.624 01:52:10 -- common/autotest_common.sh@819 -- # '[' -z 2151876 ']' 00:17:24.624 01:52:10 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:24.624 01:52:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:24.624 01:52:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:24.624 01:52:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:24.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:24.624 01:52:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:24.624 01:52:10 -- common/autotest_common.sh@10 -- # set +x 00:17:24.883 [2024-04-15 01:52:10.290877] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
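The cluster counts this test asserts on fall straight out of the sizes used above: a 200 MiB AIO file carved into 4 MiB clusters (--cluster-sz 4194304) gives 50 clusters, of which one is evidently reserved for lvstore metadata, leaving total_data_clusters=49. Growing the backing file to 400 MiB and rescanning only resizes the bdev, so the count stays 49; it is the bdev_lvol_grow_lvstore call later in the log that lifts it to 100 - 1 = 99. A quick sketch of the arithmetic, assuming exactly one metadata cluster as inferred from the 50-vs-49 gap observed here:

# Expected data-cluster counts for the 200M -> 400M lvs_grow test (sketch)
cluster_sz=$((4 * 1024 * 1024))       # --cluster-sz 4194304
md_clusters=1                         # inferred: 50 raw clusters vs 49 reported
echo $(( 200 * 1024 * 1024 / cluster_sz - md_clusters ))   # 49, before the grow
echo $(( 400 * 1024 * 1024 / cluster_sz - md_clusters ))   # 99, after bdev_lvol_grow_lvstore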
00:17:24.883 [2024-04-15 01:52:10.290994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2151876 ] 00:17:24.883 EAL: No free 2048 kB hugepages reported on node 1 00:17:24.883 [2024-04-15 01:52:10.352054] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.883 [2024-04-15 01:52:10.439280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.819 01:52:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:25.819 01:52:11 -- common/autotest_common.sh@852 -- # return 0 00:17:25.819 01:52:11 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:26.078 Nvme0n1 00:17:26.078 01:52:11 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:26.337 [ 00:17:26.337 { 00:17:26.337 "name": "Nvme0n1", 00:17:26.337 "aliases": [ 00:17:26.337 "308ef21d-6bb4-42b3-aab3-891cf85a4333" 00:17:26.337 ], 00:17:26.337 "product_name": "NVMe disk", 00:17:26.337 "block_size": 4096, 00:17:26.337 "num_blocks": 38912, 00:17:26.337 "uuid": "308ef21d-6bb4-42b3-aab3-891cf85a4333", 00:17:26.337 "assigned_rate_limits": { 00:17:26.337 "rw_ios_per_sec": 0, 00:17:26.337 "rw_mbytes_per_sec": 0, 00:17:26.337 "r_mbytes_per_sec": 0, 00:17:26.337 "w_mbytes_per_sec": 0 00:17:26.337 }, 00:17:26.337 "claimed": false, 00:17:26.337 "zoned": false, 00:17:26.337 "supported_io_types": { 00:17:26.337 "read": true, 00:17:26.337 "write": true, 00:17:26.337 "unmap": true, 00:17:26.337 "write_zeroes": true, 00:17:26.337 "flush": true, 00:17:26.337 "reset": true, 00:17:26.337 "compare": true, 00:17:26.337 "compare_and_write": true, 00:17:26.337 "abort": true, 00:17:26.337 "nvme_admin": true, 00:17:26.337 "nvme_io": true 00:17:26.337 }, 00:17:26.337 "driver_specific": { 00:17:26.337 "nvme": [ 00:17:26.337 { 00:17:26.337 "trid": { 00:17:26.337 "trtype": "TCP", 00:17:26.337 "adrfam": "IPv4", 00:17:26.337 "traddr": "10.0.0.2", 00:17:26.337 "trsvcid": "4420", 00:17:26.337 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:26.337 }, 00:17:26.337 "ctrlr_data": { 00:17:26.337 "cntlid": 1, 00:17:26.337 "vendor_id": "0x8086", 00:17:26.337 "model_number": "SPDK bdev Controller", 00:17:26.337 "serial_number": "SPDK0", 00:17:26.337 "firmware_revision": "24.01.1", 00:17:26.337 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:26.337 "oacs": { 00:17:26.337 "security": 0, 00:17:26.337 "format": 0, 00:17:26.337 "firmware": 0, 00:17:26.337 "ns_manage": 0 00:17:26.337 }, 00:17:26.337 "multi_ctrlr": true, 00:17:26.337 "ana_reporting": false 00:17:26.337 }, 00:17:26.337 "vs": { 00:17:26.337 "nvme_version": "1.3" 00:17:26.337 }, 00:17:26.337 "ns_data": { 00:17:26.337 "id": 1, 00:17:26.337 "can_share": true 00:17:26.337 } 00:17:26.337 } 00:17:26.337 ], 00:17:26.337 "mp_policy": "active_passive" 00:17:26.337 } 00:17:26.337 } 00:17:26.337 ] 00:17:26.337 01:52:11 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2152024 00:17:26.337 01:52:11 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:26.337 01:52:11 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:26.598 Running I/O 
for 10 seconds... 00:17:27.536 Latency(us) 00:17:27.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:27.536 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.536 Nvme0n1 : 1.00 14591.00 57.00 0.00 0.00 0.00 0.00 0.00 00:17:27.536 =================================================================================================================== 00:17:27.536 Total : 14591.00 57.00 0.00 0.00 0.00 0.00 0.00 00:17:27.536 00:17:28.476 01:52:13 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d3258cf4-5119-49c5-b713-7d5997a90ecd 00:17:28.476 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:28.476 Nvme0n1 : 2.00 14719.50 57.50 0.00 0.00 0.00 0.00 0.00 00:17:28.476 =================================================================================================================== 00:17:28.476 Total : 14719.50 57.50 0.00 0.00 0.00 0.00 0.00 00:17:28.476 00:17:28.735 true 00:17:28.735 01:52:14 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3258cf4-5119-49c5-b713-7d5997a90ecd 00:17:28.735 01:52:14 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:28.994 01:52:14 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:28.994 01:52:14 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:28.994 01:52:14 -- target/nvmf_lvs_grow.sh@65 -- # wait 2152024 00:17:29.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.565 Nvme0n1 : 3.00 14805.00 57.83 0.00 0.00 0.00 0.00 0.00 00:17:29.565 =================================================================================================================== 00:17:29.565 Total : 14805.00 57.83 0.00 0.00 0.00 0.00 0.00 00:17:29.565 00:17:30.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:30.503 Nvme0n1 : 4.00 14879.75 58.12 0.00 0.00 0.00 0.00 0.00 00:17:30.503 =================================================================================================================== 00:17:30.503 Total : 14879.75 58.12 0.00 0.00 0.00 0.00 0.00 00:17:30.503 00:17:31.441 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.441 Nvme0n1 : 5.00 14963.20 58.45 0.00 0.00 0.00 0.00 0.00 00:17:31.441 =================================================================================================================== 00:17:31.441 Total : 14963.20 58.45 0.00 0.00 0.00 0.00 0.00 00:17:31.441 00:17:32.383 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.383 Nvme0n1 : 6.00 15039.83 58.75 0.00 0.00 0.00 0.00 0.00 00:17:32.383 =================================================================================================================== 00:17:32.383 Total : 15039.83 58.75 0.00 0.00 0.00 0.00 0.00 00:17:32.383 00:17:33.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.765 Nvme0n1 : 7.00 15113.14 59.04 0.00 0.00 0.00 0.00 0.00 00:17:33.765 =================================================================================================================== 00:17:33.765 Total : 15113.14 59.04 0.00 0.00 0.00 0.00 0.00 00:17:33.765 00:17:34.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.702 Nvme0n1 : 8.00 15144.00 59.16 0.00 0.00 0.00 0.00 0.00 00:17:34.702 
=================================================================================================================== 00:17:34.702 Total : 15144.00 59.16 0.00 0.00 0.00 0.00 0.00 00:17:34.702 00:17:35.640 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:35.640 Nvme0n1 : 9.00 15160.78 59.22 0.00 0.00 0.00 0.00 0.00 00:17:35.640 =================================================================================================================== 00:17:35.640 Total : 15160.78 59.22 0.00 0.00 0.00 0.00 0.00 00:17:35.640 00:17:36.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.578 Nvme0n1 : 10.00 15187.20 59.33 0.00 0.00 0.00 0.00 0.00 00:17:36.578 =================================================================================================================== 00:17:36.578 Total : 15187.20 59.33 0.00 0.00 0.00 0.00 0.00 00:17:36.578 00:17:36.578 00:17:36.578 Latency(us) 00:17:36.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.578 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:36.578 Nvme0n1 : 10.01 15182.07 59.30 0.00 0.00 8425.02 6941.96 18447.17 00:17:36.578 =================================================================================================================== 00:17:36.578 Total : 15182.07 59.30 0.00 0.00 8425.02 6941.96 18447.17 00:17:36.578 0 00:17:36.578 01:52:22 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2151876 00:17:36.578 01:52:22 -- common/autotest_common.sh@926 -- # '[' -z 2151876 ']' 00:17:36.578 01:52:22 -- common/autotest_common.sh@930 -- # kill -0 2151876 00:17:36.578 01:52:22 -- common/autotest_common.sh@931 -- # uname 00:17:36.578 01:52:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:36.578 01:52:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2151876 00:17:36.578 01:52:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:36.578 01:52:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:36.578 01:52:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2151876' 00:17:36.578 killing process with pid 2151876 00:17:36.578 01:52:22 -- common/autotest_common.sh@945 -- # kill 2151876 00:17:36.578 Received shutdown signal, test time was about 10.000000 seconds 00:17:36.578 00:17:36.578 Latency(us) 00:17:36.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:36.578 =================================================================================================================== 00:17:36.578 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:36.578 01:52:22 -- common/autotest_common.sh@950 -- # wait 2151876 00:17:36.836 01:52:22 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:37.094 01:52:22 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3258cf4-5119-49c5-b713-7d5997a90ecd 00:17:37.094 01:52:22 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:37.354 01:52:22 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:37.354 01:52:22 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:17:37.354 01:52:22 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:37.614 [2024-04-15 01:52:23.039816] vbdev_lvol.c: 
150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:37.614 01:52:23 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3258cf4-5119-49c5-b713-7d5997a90ecd 00:17:37.614 01:52:23 -- common/autotest_common.sh@640 -- # local es=0 00:17:37.614 01:52:23 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3258cf4-5119-49c5-b713-7d5997a90ecd 00:17:37.614 01:52:23 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.614 01:52:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:37.614 01:52:23 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.614 01:52:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:37.614 01:52:23 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.614 01:52:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:37.614 01:52:23 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.614 01:52:23 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:37.614 01:52:23 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3258cf4-5119-49c5-b713-7d5997a90ecd 00:17:37.872 request: 00:17:37.872 { 00:17:37.872 "uuid": "d3258cf4-5119-49c5-b713-7d5997a90ecd", 00:17:37.872 "method": "bdev_lvol_get_lvstores", 00:17:37.872 "req_id": 1 00:17:37.872 } 00:17:37.872 Got JSON-RPC error response 00:17:37.872 response: 00:17:37.872 { 00:17:37.872 "code": -19, 00:17:37.872 "message": "No such device" 00:17:37.872 } 00:17:37.872 01:52:23 -- common/autotest_common.sh@643 -- # es=1 00:17:37.872 01:52:23 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:37.872 01:52:23 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:37.872 01:52:23 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:37.872 01:52:23 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:38.131 aio_bdev 00:17:38.131 01:52:23 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 308ef21d-6bb4-42b3-aab3-891cf85a4333 00:17:38.131 01:52:23 -- common/autotest_common.sh@887 -- # local bdev_name=308ef21d-6bb4-42b3-aab3-891cf85a4333 00:17:38.131 01:52:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:38.131 01:52:23 -- common/autotest_common.sh@889 -- # local i 00:17:38.131 01:52:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:38.131 01:52:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:38.131 01:52:23 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:38.391 01:52:23 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 308ef21d-6bb4-42b3-aab3-891cf85a4333 -t 2000 00:17:38.391 [ 00:17:38.391 { 00:17:38.391 "name": "308ef21d-6bb4-42b3-aab3-891cf85a4333", 00:17:38.391 "aliases": [ 00:17:38.391 "lvs/lvol" 
00:17:38.391 ], 00:17:38.391 "product_name": "Logical Volume", 00:17:38.391 "block_size": 4096, 00:17:38.391 "num_blocks": 38912, 00:17:38.391 "uuid": "308ef21d-6bb4-42b3-aab3-891cf85a4333", 00:17:38.391 "assigned_rate_limits": { 00:17:38.391 "rw_ios_per_sec": 0, 00:17:38.391 "rw_mbytes_per_sec": 0, 00:17:38.391 "r_mbytes_per_sec": 0, 00:17:38.391 "w_mbytes_per_sec": 0 00:17:38.391 }, 00:17:38.391 "claimed": false, 00:17:38.391 "zoned": false, 00:17:38.391 "supported_io_types": { 00:17:38.391 "read": true, 00:17:38.391 "write": true, 00:17:38.391 "unmap": true, 00:17:38.391 "write_zeroes": true, 00:17:38.391 "flush": false, 00:17:38.391 "reset": true, 00:17:38.391 "compare": false, 00:17:38.391 "compare_and_write": false, 00:17:38.391 "abort": false, 00:17:38.391 "nvme_admin": false, 00:17:38.391 "nvme_io": false 00:17:38.391 }, 00:17:38.391 "driver_specific": { 00:17:38.391 "lvol": { 00:17:38.391 "lvol_store_uuid": "d3258cf4-5119-49c5-b713-7d5997a90ecd", 00:17:38.391 "base_bdev": "aio_bdev", 00:17:38.391 "thin_provision": false, 00:17:38.391 "snapshot": false, 00:17:38.391 "clone": false, 00:17:38.391 "esnap_clone": false 00:17:38.391 } 00:17:38.391 } 00:17:38.391 } 00:17:38.391 ] 00:17:38.391 01:52:24 -- common/autotest_common.sh@895 -- # return 0 00:17:38.391 01:52:24 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3258cf4-5119-49c5-b713-7d5997a90ecd 00:17:38.391 01:52:24 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:38.650 01:52:24 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:38.650 01:52:24 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d3258cf4-5119-49c5-b713-7d5997a90ecd 00:17:38.650 01:52:24 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:38.908 01:52:24 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:38.908 01:52:24 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 308ef21d-6bb4-42b3-aab3-891cf85a4333 00:17:39.168 01:52:24 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d3258cf4-5119-49c5-b713-7d5997a90ecd 00:17:39.445 01:52:25 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:39.723 01:52:25 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:39.723 00:17:39.723 real 0m17.553s 00:17:39.723 user 0m11.973s 00:17:39.723 sys 0m3.674s 00:17:39.723 01:52:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:39.723 01:52:25 -- common/autotest_common.sh@10 -- # set +x 00:17:39.723 ************************************ 00:17:39.723 END TEST lvs_grow_clean 00:17:39.723 ************************************ 00:17:39.723 01:52:25 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:39.723 01:52:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:39.723 01:52:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:39.723 01:52:25 -- common/autotest_common.sh@10 -- # set +x 00:17:39.723 ************************************ 00:17:39.723 START TEST lvs_grow_dirty 00:17:39.723 ************************************ 00:17:39.723 01:52:25 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:17:39.723 
01:52:25 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:39.723 01:52:25 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:39.723 01:52:25 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:39.723 01:52:25 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:39.723 01:52:25 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:39.723 01:52:25 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:39.723 01:52:25 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:39.723 01:52:25 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:39.723 01:52:25 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:39.981 01:52:25 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:39.982 01:52:25 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:40.242 01:52:25 -- target/nvmf_lvs_grow.sh@28 -- # lvs=f9beb6dc-3e4f-4ba0-80d4-282fce624a5d 00:17:40.242 01:52:25 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9beb6dc-3e4f-4ba0-80d4-282fce624a5d 00:17:40.242 01:52:25 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:40.501 01:52:26 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:40.501 01:52:26 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:40.501 01:52:26 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f9beb6dc-3e4f-4ba0-80d4-282fce624a5d lvol 150 00:17:40.760 01:52:26 -- target/nvmf_lvs_grow.sh@33 -- # lvol=bb2a28fb-94c7-4ef6-b117-8c91c910ddac 00:17:40.761 01:52:26 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:40.761 01:52:26 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:41.021 [2024-04-15 01:52:26.545219] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:41.021 [2024-04-15 01:52:26.545302] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:41.021 true 00:17:41.021 01:52:26 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9beb6dc-3e4f-4ba0-80d4-282fce624a5d 00:17:41.021 01:52:26 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:41.279 01:52:26 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:41.279 01:52:26 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:41.538 01:52:27 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bb2a28fb-94c7-4ef6-b117-8c91c910ddac 00:17:41.797 01:52:27 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:42.055 01:52:27 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:42.312 01:52:27 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2153988 00:17:42.312 01:52:27 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:42.312 01:52:27 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:42.312 01:52:27 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2153988 /var/tmp/bdevperf.sock 00:17:42.312 01:52:27 -- common/autotest_common.sh@819 -- # '[' -z 2153988 ']' 00:17:42.312 01:52:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:42.312 01:52:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:42.312 01:52:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:42.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:42.312 01:52:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:42.312 01:52:27 -- common/autotest_common.sh@10 -- # set +x 00:17:42.312 [2024-04-15 01:52:27.818642] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:17:42.312 [2024-04-15 01:52:27.818709] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2153988 ] 00:17:42.312 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.312 [2024-04-15 01:52:27.880844] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.570 [2024-04-15 01:52:27.970262] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:43.136 01:52:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:43.136 01:52:28 -- common/autotest_common.sh@852 -- # return 0 00:17:43.136 01:52:28 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:43.704 Nvme0n1 00:17:43.704 01:52:29 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:43.704 [ 00:17:43.704 { 00:17:43.704 "name": "Nvme0n1", 00:17:43.704 "aliases": [ 00:17:43.704 "bb2a28fb-94c7-4ef6-b117-8c91c910ddac" 00:17:43.704 ], 00:17:43.704 "product_name": "NVMe disk", 00:17:43.704 "block_size": 4096, 00:17:43.704 "num_blocks": 38912, 00:17:43.704 "uuid": "bb2a28fb-94c7-4ef6-b117-8c91c910ddac", 00:17:43.704 "assigned_rate_limits": { 00:17:43.704 "rw_ios_per_sec": 0, 00:17:43.704 "rw_mbytes_per_sec": 0, 00:17:43.704 "r_mbytes_per_sec": 0, 00:17:43.704 "w_mbytes_per_sec": 0 00:17:43.704 }, 00:17:43.704 "claimed": false, 00:17:43.704 "zoned": false, 00:17:43.704 "supported_io_types": { 00:17:43.704 "read": true, 00:17:43.704 "write": true, 
00:17:43.704 "unmap": true, 00:17:43.704 "write_zeroes": true, 00:17:43.704 "flush": true, 00:17:43.704 "reset": true, 00:17:43.704 "compare": true, 00:17:43.704 "compare_and_write": true, 00:17:43.704 "abort": true, 00:17:43.704 "nvme_admin": true, 00:17:43.704 "nvme_io": true 00:17:43.704 }, 00:17:43.704 "driver_specific": { 00:17:43.704 "nvme": [ 00:17:43.704 { 00:17:43.704 "trid": { 00:17:43.704 "trtype": "TCP", 00:17:43.704 "adrfam": "IPv4", 00:17:43.704 "traddr": "10.0.0.2", 00:17:43.704 "trsvcid": "4420", 00:17:43.704 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:43.704 }, 00:17:43.704 "ctrlr_data": { 00:17:43.704 "cntlid": 1, 00:17:43.704 "vendor_id": "0x8086", 00:17:43.704 "model_number": "SPDK bdev Controller", 00:17:43.704 "serial_number": "SPDK0", 00:17:43.704 "firmware_revision": "24.01.1", 00:17:43.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:43.704 "oacs": { 00:17:43.704 "security": 0, 00:17:43.704 "format": 0, 00:17:43.704 "firmware": 0, 00:17:43.704 "ns_manage": 0 00:17:43.704 }, 00:17:43.704 "multi_ctrlr": true, 00:17:43.704 "ana_reporting": false 00:17:43.704 }, 00:17:43.704 "vs": { 00:17:43.704 "nvme_version": "1.3" 00:17:43.704 }, 00:17:43.704 "ns_data": { 00:17:43.704 "id": 1, 00:17:43.704 "can_share": true 00:17:43.704 } 00:17:43.704 } 00:17:43.704 ], 00:17:43.704 "mp_policy": "active_passive" 00:17:43.704 } 00:17:43.704 } 00:17:43.704 ] 00:17:43.704 01:52:29 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2154136 00:17:43.704 01:52:29 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:43.704 01:52:29 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:43.963 Running I/O for 10 seconds... 00:17:44.898 Latency(us) 00:17:44.898 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:44.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:44.898 Nvme0n1 : 1.00 14242.00 55.63 0.00 0.00 0.00 0.00 0.00 00:17:44.898 =================================================================================================================== 00:17:44.898 Total : 14242.00 55.63 0.00 0.00 0.00 0.00 0.00 00:17:44.898 00:17:45.835 01:52:31 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u f9beb6dc-3e4f-4ba0-80d4-282fce624a5d 00:17:45.836 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:45.836 Nvme0n1 : 2.00 14456.00 56.47 0.00 0.00 0.00 0.00 0.00 00:17:45.836 =================================================================================================================== 00:17:45.836 Total : 14456.00 56.47 0.00 0.00 0.00 0.00 0.00 00:17:45.836 00:17:46.095 true 00:17:46.095 01:52:31 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9beb6dc-3e4f-4ba0-80d4-282fce624a5d 00:17:46.095 01:52:31 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:46.354 01:52:31 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:46.354 01:52:31 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:46.354 01:52:31 -- target/nvmf_lvs_grow.sh@65 -- # wait 2154136 00:17:46.922 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:46.923 Nvme0n1 : 3.00 14586.67 56.98 0.00 0.00 0.00 0.00 0.00 00:17:46.923 
=================================================================================================================== 00:17:46.923 Total : 14586.67 56.98 0.00 0.00 0.00 0.00 0.00 00:17:46.923 00:17:47.861 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:47.861 Nvme0n1 : 4.00 14728.25 57.53 0.00 0.00 0.00 0.00 0.00 00:17:47.861 =================================================================================================================== 00:17:47.861 Total : 14728.25 57.53 0.00 0.00 0.00 0.00 0.00 00:17:47.861 00:17:48.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:48.799 Nvme0n1 : 5.00 14768.00 57.69 0.00 0.00 0.00 0.00 0.00 00:17:48.799 =================================================================================================================== 00:17:48.799 Total : 14768.00 57.69 0.00 0.00 0.00 0.00 0.00 00:17:48.799 00:17:50.176 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:50.176 Nvme0n1 : 6.00 14802.67 57.82 0.00 0.00 0.00 0.00 0.00 00:17:50.176 =================================================================================================================== 00:17:50.176 Total : 14802.67 57.82 0.00 0.00 0.00 0.00 0.00 00:17:50.176 00:17:51.112 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:51.112 Nvme0n1 : 7.00 14834.43 57.95 0.00 0.00 0.00 0.00 0.00 00:17:51.112 =================================================================================================================== 00:17:51.112 Total : 14834.43 57.95 0.00 0.00 0.00 0.00 0.00 00:17:51.112 00:17:52.049 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.049 Nvme0n1 : 8.00 14860.12 58.05 0.00 0.00 0.00 0.00 0.00 00:17:52.049 =================================================================================================================== 00:17:52.049 Total : 14860.12 58.05 0.00 0.00 0.00 0.00 0.00 00:17:52.049 00:17:52.990 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:52.990 Nvme0n1 : 9.00 14880.11 58.13 0.00 0.00 0.00 0.00 0.00 00:17:52.990 =================================================================================================================== 00:17:52.990 Total : 14880.11 58.13 0.00 0.00 0.00 0.00 0.00 00:17:52.990 00:17:53.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.928 Nvme0n1 : 10.00 14902.50 58.21 0.00 0.00 0.00 0.00 0.00 00:17:53.928 =================================================================================================================== 00:17:53.928 Total : 14902.50 58.21 0.00 0.00 0.00 0.00 0.00 00:17:53.928 00:17:53.928 00:17:53.928 Latency(us) 00:17:53.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.928 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:53.928 Nvme0n1 : 10.01 14903.71 58.22 0.00 0.00 8582.68 2148.12 14660.65 00:17:53.928 =================================================================================================================== 00:17:53.928 Total : 14903.71 58.22 0.00 0.00 8582.68 2148.12 14660.65 00:17:53.928 0 00:17:53.928 01:52:39 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2153988 00:17:53.928 01:52:39 -- common/autotest_common.sh@926 -- # '[' -z 2153988 ']' 00:17:53.928 01:52:39 -- common/autotest_common.sh@930 -- # kill -0 2153988 00:17:53.928 01:52:39 -- common/autotest_common.sh@931 -- # uname 00:17:53.928 01:52:39 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:53.928 01:52:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2153988 00:17:53.928 01:52:39 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:53.928 01:52:39 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:53.928 01:52:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2153988' 00:17:53.928 killing process with pid 2153988 00:17:53.928 01:52:39 -- common/autotest_common.sh@945 -- # kill 2153988 00:17:53.928 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.928 00:17:53.928 Latency(us) 00:17:53.928 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.928 =================================================================================================================== 00:17:53.928 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.928 01:52:39 -- common/autotest_common.sh@950 -- # wait 2153988 00:17:54.218 01:52:39 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:54.477 01:52:39 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9beb6dc-3e4f-4ba0-80d4-282fce624a5d 00:17:54.477 01:52:39 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:17:54.736 01:52:40 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:17:54.736 01:52:40 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:17:54.736 01:52:40 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 2151299 00:17:54.736 01:52:40 -- target/nvmf_lvs_grow.sh@74 -- # wait 2151299 00:17:54.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 2151299 Killed "${NVMF_APP[@]}" "$@" 00:17:54.736 01:52:40 -- target/nvmf_lvs_grow.sh@74 -- # true 00:17:54.736 01:52:40 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:17:54.736 01:52:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:54.736 01:52:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:54.736 01:52:40 -- common/autotest_common.sh@10 -- # set +x 00:17:54.736 01:52:40 -- nvmf/common.sh@469 -- # nvmfpid=2155501 00:17:54.736 01:52:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:54.736 01:52:40 -- nvmf/common.sh@470 -- # waitforlisten 2155501 00:17:54.736 01:52:40 -- common/autotest_common.sh@819 -- # '[' -z 2155501 ']' 00:17:54.736 01:52:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:54.736 01:52:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:54.736 01:52:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:54.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:54.736 01:52:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:54.736 01:52:40 -- common/autotest_common.sh@10 -- # set +x 00:17:54.736 [2024-04-15 01:52:40.303547] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
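This restart is the crux of the dirty variant: the first target (pid 2151299) was removed with kill -9 while the lvstore was still open, so the blobstore backing it was never cleanly closed. When the replacement target re-creates the AIO bdev over the same file, blobstore load detects the unclean shutdown and replays its metadata (the bs_recover and 'Recover: blob 0x0' / 'blob 0x1' notices just below) before the lvstore and its lvol become visible again. A minimal sketch of that re-attach step, reusing the rpc.py calls that appear verbatim in this log and the same backing-file path:

# Re-attaching a dirtily closed lvstore after the target restarts (sketch)
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
AIO=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
$RPC bdev_aio_create "$AIO" aio_bdev 4096   # loading the bdev triggers blobstore recovery
$RPC bdev_wait_for_examine                  # wait for lvol examine/recovery to finish
$RPC bdev_lvol_get_lvstores                 # lvstore 'lvs' reappears with clusters intact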
00:17:54.736 [2024-04-15 01:52:40.303620] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:54.736 EAL: No free 2048 kB hugepages reported on node 1 00:17:54.736 [2024-04-15 01:52:40.371015] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:54.994 [2024-04-15 01:52:40.455425] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:54.994 [2024-04-15 01:52:40.455573] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:54.995 [2024-04-15 01:52:40.455590] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:54.995 [2024-04-15 01:52:40.455603] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:54.995 [2024-04-15 01:52:40.455635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.932 01:52:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:55.932 01:52:41 -- common/autotest_common.sh@852 -- # return 0 00:17:55.932 01:52:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:55.932 01:52:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:55.932 01:52:41 -- common/autotest_common.sh@10 -- # set +x 00:17:55.932 01:52:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:55.932 01:52:41 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:55.932 [2024-04-15 01:52:41.575461] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:55.932 [2024-04-15 01:52:41.575597] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:55.932 [2024-04-15 01:52:41.575652] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:56.191 01:52:41 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:17:56.191 01:52:41 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev bb2a28fb-94c7-4ef6-b117-8c91c910ddac 00:17:56.191 01:52:41 -- common/autotest_common.sh@887 -- # local bdev_name=bb2a28fb-94c7-4ef6-b117-8c91c910ddac 00:17:56.191 01:52:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:56.191 01:52:41 -- common/autotest_common.sh@889 -- # local i 00:17:56.191 01:52:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:56.191 01:52:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:56.191 01:52:41 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:56.450 01:52:41 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bb2a28fb-94c7-4ef6-b117-8c91c910ddac -t 2000 00:17:56.450 [ 00:17:56.450 { 00:17:56.450 "name": "bb2a28fb-94c7-4ef6-b117-8c91c910ddac", 00:17:56.450 "aliases": [ 00:17:56.450 "lvs/lvol" 00:17:56.450 ], 00:17:56.450 "product_name": "Logical Volume", 00:17:56.450 "block_size": 4096, 00:17:56.450 "num_blocks": 38912, 00:17:56.450 "uuid": "bb2a28fb-94c7-4ef6-b117-8c91c910ddac", 00:17:56.450 "assigned_rate_limits": { 00:17:56.450 "rw_ios_per_sec": 0, 00:17:56.450 "rw_mbytes_per_sec": 0, 00:17:56.450 "r_mbytes_per_sec": 0, 00:17:56.450 
"w_mbytes_per_sec": 0 00:17:56.450 }, 00:17:56.450 "claimed": false, 00:17:56.450 "zoned": false, 00:17:56.450 "supported_io_types": { 00:17:56.450 "read": true, 00:17:56.450 "write": true, 00:17:56.450 "unmap": true, 00:17:56.450 "write_zeroes": true, 00:17:56.450 "flush": false, 00:17:56.450 "reset": true, 00:17:56.450 "compare": false, 00:17:56.450 "compare_and_write": false, 00:17:56.450 "abort": false, 00:17:56.450 "nvme_admin": false, 00:17:56.450 "nvme_io": false 00:17:56.450 }, 00:17:56.450 "driver_specific": { 00:17:56.450 "lvol": { 00:17:56.450 "lvol_store_uuid": "f9beb6dc-3e4f-4ba0-80d4-282fce624a5d", 00:17:56.450 "base_bdev": "aio_bdev", 00:17:56.450 "thin_provision": false, 00:17:56.450 "snapshot": false, 00:17:56.450 "clone": false, 00:17:56.450 "esnap_clone": false 00:17:56.450 } 00:17:56.450 } 00:17:56.450 } 00:17:56.450 ] 00:17:56.450 01:52:42 -- common/autotest_common.sh@895 -- # return 0 00:17:56.450 01:52:42 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9beb6dc-3e4f-4ba0-80d4-282fce624a5d 00:17:56.450 01:52:42 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:17:56.708 01:52:42 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:17:56.708 01:52:42 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9beb6dc-3e4f-4ba0-80d4-282fce624a5d 00:17:56.708 01:52:42 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:17:56.966 01:52:42 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:17:56.966 01:52:42 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:57.226 [2024-04-15 01:52:42.796356] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:57.226 01:52:42 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9beb6dc-3e4f-4ba0-80d4-282fce624a5d 00:17:57.226 01:52:42 -- common/autotest_common.sh@640 -- # local es=0 00:17:57.226 01:52:42 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9beb6dc-3e4f-4ba0-80d4-282fce624a5d 00:17:57.226 01:52:42 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.226 01:52:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:57.226 01:52:42 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.226 01:52:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:57.226 01:52:42 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.226 01:52:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:57.226 01:52:42 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:57.226 01:52:42 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:57.226 01:52:42 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9beb6dc-3e4f-4ba0-80d4-282fce624a5d 00:17:57.484 request: 00:17:57.484 { 00:17:57.484 
"uuid": "f9beb6dc-3e4f-4ba0-80d4-282fce624a5d", 00:17:57.484 "method": "bdev_lvol_get_lvstores", 00:17:57.484 "req_id": 1 00:17:57.484 } 00:17:57.484 Got JSON-RPC error response 00:17:57.484 response: 00:17:57.484 { 00:17:57.484 "code": -19, 00:17:57.484 "message": "No such device" 00:17:57.484 } 00:17:57.484 01:52:43 -- common/autotest_common.sh@643 -- # es=1 00:17:57.484 01:52:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:57.484 01:52:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:57.484 01:52:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:57.484 01:52:43 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:57.742 aio_bdev 00:17:57.742 01:52:43 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev bb2a28fb-94c7-4ef6-b117-8c91c910ddac 00:17:57.742 01:52:43 -- common/autotest_common.sh@887 -- # local bdev_name=bb2a28fb-94c7-4ef6-b117-8c91c910ddac 00:17:57.742 01:52:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:17:57.742 01:52:43 -- common/autotest_common.sh@889 -- # local i 00:17:57.742 01:52:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:17:57.742 01:52:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:17:57.742 01:52:43 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:58.000 01:52:43 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b bb2a28fb-94c7-4ef6-b117-8c91c910ddac -t 2000 00:17:58.260 [ 00:17:58.260 { 00:17:58.260 "name": "bb2a28fb-94c7-4ef6-b117-8c91c910ddac", 00:17:58.260 "aliases": [ 00:17:58.260 "lvs/lvol" 00:17:58.260 ], 00:17:58.260 "product_name": "Logical Volume", 00:17:58.260 "block_size": 4096, 00:17:58.260 "num_blocks": 38912, 00:17:58.260 "uuid": "bb2a28fb-94c7-4ef6-b117-8c91c910ddac", 00:17:58.260 "assigned_rate_limits": { 00:17:58.260 "rw_ios_per_sec": 0, 00:17:58.260 "rw_mbytes_per_sec": 0, 00:17:58.260 "r_mbytes_per_sec": 0, 00:17:58.260 "w_mbytes_per_sec": 0 00:17:58.260 }, 00:17:58.260 "claimed": false, 00:17:58.260 "zoned": false, 00:17:58.260 "supported_io_types": { 00:17:58.260 "read": true, 00:17:58.260 "write": true, 00:17:58.260 "unmap": true, 00:17:58.260 "write_zeroes": true, 00:17:58.260 "flush": false, 00:17:58.260 "reset": true, 00:17:58.260 "compare": false, 00:17:58.260 "compare_and_write": false, 00:17:58.260 "abort": false, 00:17:58.260 "nvme_admin": false, 00:17:58.260 "nvme_io": false 00:17:58.260 }, 00:17:58.260 "driver_specific": { 00:17:58.260 "lvol": { 00:17:58.260 "lvol_store_uuid": "f9beb6dc-3e4f-4ba0-80d4-282fce624a5d", 00:17:58.260 "base_bdev": "aio_bdev", 00:17:58.260 "thin_provision": false, 00:17:58.260 "snapshot": false, 00:17:58.260 "clone": false, 00:17:58.260 "esnap_clone": false 00:17:58.260 } 00:17:58.260 } 00:17:58.260 } 00:17:58.261 ] 00:17:58.261 01:52:43 -- common/autotest_common.sh@895 -- # return 0 00:17:58.261 01:52:43 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9beb6dc-3e4f-4ba0-80d4-282fce624a5d 00:17:58.261 01:52:43 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:17:58.519 01:52:44 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:17:58.519 01:52:44 -- target/nvmf_lvs_grow.sh@88 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u f9beb6dc-3e4f-4ba0-80d4-282fce624a5d 00:17:58.519 01:52:44 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:17:58.778 01:52:44 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:17:58.778 01:52:44 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete bb2a28fb-94c7-4ef6-b117-8c91c910ddac 00:17:59.037 01:52:44 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f9beb6dc-3e4f-4ba0-80d4-282fce624a5d 00:17:59.295 01:52:44 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:59.555 01:52:45 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:59.555 00:17:59.555 real 0m19.728s 00:17:59.555 user 0m49.384s 00:17:59.555 sys 0m4.806s 00:17:59.555 01:52:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:59.555 01:52:45 -- common/autotest_common.sh@10 -- # set +x 00:17:59.555 ************************************ 00:17:59.555 END TEST lvs_grow_dirty 00:17:59.555 ************************************ 00:17:59.555 01:52:45 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:17:59.555 01:52:45 -- common/autotest_common.sh@796 -- # type=--id 00:17:59.555 01:52:45 -- common/autotest_common.sh@797 -- # id=0 00:17:59.555 01:52:45 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:59.555 01:52:45 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:59.555 01:52:45 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:59.555 01:52:45 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:59.555 01:52:45 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:59.555 01:52:45 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:59.555 nvmf_trace.0 00:17:59.555 01:52:45 -- common/autotest_common.sh@811 -- # return 0 00:17:59.555 01:52:45 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:59.555 01:52:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:59.555 01:52:45 -- nvmf/common.sh@116 -- # sync 00:17:59.555 01:52:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:59.555 01:52:45 -- nvmf/common.sh@119 -- # set +e 00:17:59.555 01:52:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:59.555 01:52:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:59.555 rmmod nvme_tcp 00:17:59.555 rmmod nvme_fabrics 00:17:59.555 rmmod nvme_keyring 00:17:59.555 01:52:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:59.555 01:52:45 -- nvmf/common.sh@123 -- # set -e 00:17:59.555 01:52:45 -- nvmf/common.sh@124 -- # return 0 00:17:59.555 01:52:45 -- nvmf/common.sh@477 -- # '[' -n 2155501 ']' 00:17:59.555 01:52:45 -- nvmf/common.sh@478 -- # killprocess 2155501 00:17:59.555 01:52:45 -- common/autotest_common.sh@926 -- # '[' -z 2155501 ']' 00:17:59.555 01:52:45 -- common/autotest_common.sh@930 -- # kill -0 2155501 00:17:59.555 01:52:45 -- common/autotest_common.sh@931 -- # uname 00:17:59.555 01:52:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:59.555 01:52:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2155501 00:17:59.555 01:52:45 -- common/autotest_common.sh@932 
-- # process_name=reactor_0 00:17:59.555 01:52:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:59.555 01:52:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2155501' 00:17:59.555 killing process with pid 2155501 00:17:59.555 01:52:45 -- common/autotest_common.sh@945 -- # kill 2155501 00:17:59.555 01:52:45 -- common/autotest_common.sh@950 -- # wait 2155501 00:17:59.814 01:52:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:59.814 01:52:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:59.814 01:52:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:59.814 01:52:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:59.814 01:52:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:59.814 01:52:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.814 01:52:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.814 01:52:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.357 01:52:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:02.357 00:18:02.357 real 0m43.125s 00:18:02.357 user 1m7.740s 00:18:02.357 sys 0m10.291s 00:18:02.357 01:52:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:02.357 01:52:47 -- common/autotest_common.sh@10 -- # set +x 00:18:02.357 ************************************ 00:18:02.357 END TEST nvmf_lvs_grow 00:18:02.357 ************************************ 00:18:02.357 01:52:47 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:02.357 01:52:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:02.357 01:52:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:02.357 01:52:47 -- common/autotest_common.sh@10 -- # set +x 00:18:02.357 ************************************ 00:18:02.357 START TEST nvmf_bdev_io_wait 00:18:02.357 ************************************ 00:18:02.357 01:52:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:18:02.357 * Looking for test storage... 
00:18:02.357 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:02.357 01:52:47 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.357 01:52:47 -- nvmf/common.sh@7 -- # uname -s 00:18:02.357 01:52:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:02.357 01:52:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:02.357 01:52:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:02.357 01:52:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:02.357 01:52:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:02.357 01:52:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:02.357 01:52:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:02.357 01:52:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:02.357 01:52:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:02.357 01:52:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:02.357 01:52:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:02.357 01:52:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:02.357 01:52:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:02.357 01:52:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:02.357 01:52:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:02.357 01:52:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:02.357 01:52:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:02.357 01:52:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:02.357 01:52:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:02.357 01:52:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.357 01:52:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.358 01:52:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.358 01:52:47 -- paths/export.sh@5 -- # export PATH 00:18:02.358 01:52:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:02.358 01:52:47 -- nvmf/common.sh@46 -- # : 0 00:18:02.358 01:52:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:02.358 01:52:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:02.358 01:52:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:02.358 01:52:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:02.358 01:52:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:02.358 01:52:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:02.358 01:52:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:02.358 01:52:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:02.358 01:52:47 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:02.358 01:52:47 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:02.358 01:52:47 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:18:02.358 01:52:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:02.358 01:52:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:02.358 01:52:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:02.358 01:52:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:02.358 01:52:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:02.358 01:52:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:02.358 01:52:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:02.358 01:52:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:02.358 01:52:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:02.358 01:52:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:02.358 01:52:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:02.358 01:52:47 -- common/autotest_common.sh@10 -- # set +x 00:18:04.261 01:52:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:04.261 01:52:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:04.261 01:52:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:04.261 01:52:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:04.261 01:52:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:04.261 01:52:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:04.261 01:52:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:04.261 01:52:49 -- nvmf/common.sh@294 -- # net_devs=() 00:18:04.261 01:52:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:04.261 01:52:49 -- 
nvmf/common.sh@295 -- # e810=() 00:18:04.261 01:52:49 -- nvmf/common.sh@295 -- # local -ga e810 00:18:04.261 01:52:49 -- nvmf/common.sh@296 -- # x722=() 00:18:04.261 01:52:49 -- nvmf/common.sh@296 -- # local -ga x722 00:18:04.261 01:52:49 -- nvmf/common.sh@297 -- # mlx=() 00:18:04.261 01:52:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:04.261 01:52:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:04.261 01:52:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:04.261 01:52:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:04.261 01:52:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:04.261 01:52:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:04.261 01:52:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:04.261 01:52:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:04.261 01:52:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:04.261 01:52:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:04.261 01:52:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:04.261 01:52:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:04.261 01:52:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:04.261 01:52:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:04.261 01:52:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:04.261 01:52:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:04.261 01:52:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:04.261 01:52:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:04.261 01:52:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:04.261 01:52:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:04.261 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:04.261 01:52:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:04.261 01:52:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:04.262 01:52:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.262 01:52:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.262 01:52:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:04.262 01:52:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:04.262 01:52:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:04.262 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:04.262 01:52:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:04.262 01:52:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:04.262 01:52:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:04.262 01:52:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:04.262 01:52:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:04.262 01:52:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:04.262 01:52:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:04.262 01:52:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:04.262 01:52:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:04.262 01:52:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.262 01:52:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:04.262 01:52:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.262 01:52:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:18:04.262 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:04.262 01:52:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.262 01:52:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:04.262 01:52:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:04.262 01:52:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:04.262 01:52:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:04.262 01:52:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:04.262 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:04.262 01:52:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:04.262 01:52:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:04.262 01:52:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:04.262 01:52:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:04.262 01:52:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:04.262 01:52:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:04.262 01:52:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:04.262 01:52:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:04.262 01:52:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:04.262 01:52:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:04.262 01:52:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:04.262 01:52:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:04.262 01:52:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:04.262 01:52:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:04.262 01:52:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:04.262 01:52:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:04.262 01:52:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:04.262 01:52:49 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:04.262 01:52:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:04.262 01:52:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:04.262 01:52:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:04.262 01:52:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:04.262 01:52:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:04.262 01:52:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:04.262 01:52:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:04.262 01:52:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:04.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:04.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:18:04.262 00:18:04.262 --- 10.0.0.2 ping statistics --- 00:18:04.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.262 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:18:04.262 01:52:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:04.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:04.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:18:04.262 00:18:04.262 --- 10.0.0.1 ping statistics --- 00:18:04.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:04.262 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:18:04.262 01:52:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:04.262 01:52:49 -- nvmf/common.sh@410 -- # return 0 00:18:04.262 01:52:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:04.262 01:52:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:04.262 01:52:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:04.262 01:52:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:04.262 01:52:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:04.262 01:52:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:04.262 01:52:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:04.262 01:52:49 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:18:04.262 01:52:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:04.262 01:52:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:04.262 01:52:49 -- common/autotest_common.sh@10 -- # set +x 00:18:04.262 01:52:49 -- nvmf/common.sh@469 -- # nvmfpid=2158067 00:18:04.262 01:52:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:18:04.262 01:52:49 -- nvmf/common.sh@470 -- # waitforlisten 2158067 00:18:04.262 01:52:49 -- common/autotest_common.sh@819 -- # '[' -z 2158067 ']' 00:18:04.262 01:52:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.262 01:52:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:04.262 01:52:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.262 01:52:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:04.262 01:52:49 -- common/autotest_common.sh@10 -- # set +x 00:18:04.262 [2024-04-15 01:52:49.794854] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:04.262 [2024-04-15 01:52:49.794929] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.262 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.262 [2024-04-15 01:52:49.867498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:04.521 [2024-04-15 01:52:49.962031] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:04.521 [2024-04-15 01:52:49.962204] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.521 [2024-04-15 01:52:49.962224] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.521 [2024-04-15 01:52:49.962238] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
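The nvmf_tgt above was launched with --wait-for-rpc, so it sits idle until the harness configures it over JSON-RPC; each "rpc_cmd ..." step that follows is a thin wrapper around scripts/rpc.py talking to the default /var/tmp/spdk.sock socket. A minimal standalone sketch of the same bring-up, with paths and NQN taken from this run (the deliberately tiny bdev_io pool is the point of the test, since it forces submissions onto the bdev_io_wait queueing path):

    # shrink the global bdev_io pool to 5 entries (per-channel cache of 1) before
    # framework init, so bdevperf exhausts the pool and must wait for free bdev_ios
    scripts/rpc.py bdev_set_options -p 5 -c 1
    scripts/rpc.py framework_start_init
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    # 64 MiB malloc bdev with 512-byte blocks, exported through cnode1 on 10.0.0.2:4420
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420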
00:18:04.521 [2024-04-15 01:52:49.965072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.521 [2024-04-15 01:52:49.965125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:04.521 [2024-04-15 01:52:49.965228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:04.521 [2024-04-15 01:52:49.965231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.521 01:52:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:04.521 01:52:50 -- common/autotest_common.sh@852 -- # return 0 00:18:04.521 01:52:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:04.521 01:52:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:04.521 01:52:50 -- common/autotest_common.sh@10 -- # set +x 00:18:04.521 01:52:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.521 01:52:50 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:18:04.521 01:52:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:04.521 01:52:50 -- common/autotest_common.sh@10 -- # set +x 00:18:04.521 01:52:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:04.521 01:52:50 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:18:04.521 01:52:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:04.521 01:52:50 -- common/autotest_common.sh@10 -- # set +x 00:18:04.521 01:52:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:04.521 01:52:50 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:04.521 01:52:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:04.521 01:52:50 -- common/autotest_common.sh@10 -- # set +x 00:18:04.521 [2024-04-15 01:52:50.135664] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.521 01:52:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:04.521 01:52:50 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:04.521 01:52:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:04.521 01:52:50 -- common/autotest_common.sh@10 -- # set +x 00:18:04.780 Malloc0 00:18:04.780 01:52:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:04.780 01:52:50 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:04.780 01:52:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:04.780 01:52:50 -- common/autotest_common.sh@10 -- # set +x 00:18:04.780 01:52:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:04.780 01:52:50 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:04.780 01:52:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:04.780 01:52:50 -- common/autotest_common.sh@10 -- # set +x 00:18:04.780 01:52:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:04.780 01:52:50 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:04.780 01:52:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:04.780 01:52:50 -- common/autotest_common.sh@10 -- # set +x 00:18:04.780 [2024-04-15 01:52:50.201769] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.780 01:52:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:04.780 01:52:50 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2158214 00:18:04.780 
01:52:50 -- target/bdev_io_wait.sh@30 -- # READ_PID=2158215 00:18:04.780 01:52:50 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:18:04.780 01:52:50 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:18:04.780 01:52:50 -- nvmf/common.sh@520 -- # config=() 00:18:04.780 01:52:50 -- nvmf/common.sh@520 -- # local subsystem config 00:18:04.780 01:52:50 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2158218 00:18:04.780 01:52:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:04.780 01:52:50 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:18:04.780 01:52:50 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:18:04.780 01:52:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:04.780 { 00:18:04.780 "params": { 00:18:04.780 "name": "Nvme$subsystem", 00:18:04.780 "trtype": "$TEST_TRANSPORT", 00:18:04.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:04.780 "adrfam": "ipv4", 00:18:04.780 "trsvcid": "$NVMF_PORT", 00:18:04.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:04.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:04.780 "hdgst": ${hdgst:-false}, 00:18:04.780 "ddgst": ${ddgst:-false} 00:18:04.780 }, 00:18:04.780 "method": "bdev_nvme_attach_controller" 00:18:04.780 } 00:18:04.780 EOF 00:18:04.780 )") 00:18:04.780 01:52:50 -- nvmf/common.sh@520 -- # config=() 00:18:04.780 01:52:50 -- nvmf/common.sh@520 -- # local subsystem config 00:18:04.780 01:52:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:04.780 01:52:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:04.780 { 00:18:04.780 "params": { 00:18:04.780 "name": "Nvme$subsystem", 00:18:04.780 "trtype": "$TEST_TRANSPORT", 00:18:04.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:04.780 "adrfam": "ipv4", 00:18:04.780 "trsvcid": "$NVMF_PORT", 00:18:04.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:04.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:04.780 "hdgst": ${hdgst:-false}, 00:18:04.780 "ddgst": ${ddgst:-false} 00:18:04.780 }, 00:18:04.780 "method": "bdev_nvme_attach_controller" 00:18:04.780 } 00:18:04.780 EOF 00:18:04.780 )") 00:18:04.780 01:52:50 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2158220 00:18:04.780 01:52:50 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:18:04.780 01:52:50 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:18:04.780 01:52:50 -- target/bdev_io_wait.sh@35 -- # sync 00:18:04.780 01:52:50 -- nvmf/common.sh@520 -- # config=() 00:18:04.780 01:52:50 -- nvmf/common.sh@520 -- # local subsystem config 00:18:04.780 01:52:50 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:18:04.780 01:52:50 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:18:04.780 01:52:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:04.780 01:52:50 -- nvmf/common.sh@542 -- # cat 00:18:04.780 01:52:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:04.780 { 00:18:04.780 "params": { 00:18:04.780 "name": "Nvme$subsystem", 00:18:04.780 "trtype": "$TEST_TRANSPORT", 00:18:04.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:04.780 
"adrfam": "ipv4", 00:18:04.780 "trsvcid": "$NVMF_PORT", 00:18:04.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:04.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:04.780 "hdgst": ${hdgst:-false}, 00:18:04.780 "ddgst": ${ddgst:-false} 00:18:04.780 }, 00:18:04.780 "method": "bdev_nvme_attach_controller" 00:18:04.780 } 00:18:04.780 EOF 00:18:04.780 )") 00:18:04.780 01:52:50 -- nvmf/common.sh@520 -- # config=() 00:18:04.780 01:52:50 -- nvmf/common.sh@520 -- # local subsystem config 00:18:04.780 01:52:50 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:04.781 01:52:50 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:04.781 { 00:18:04.781 "params": { 00:18:04.781 "name": "Nvme$subsystem", 00:18:04.781 "trtype": "$TEST_TRANSPORT", 00:18:04.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:04.781 "adrfam": "ipv4", 00:18:04.781 "trsvcid": "$NVMF_PORT", 00:18:04.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:04.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:04.781 "hdgst": ${hdgst:-false}, 00:18:04.781 "ddgst": ${ddgst:-false} 00:18:04.781 }, 00:18:04.781 "method": "bdev_nvme_attach_controller" 00:18:04.781 } 00:18:04.781 EOF 00:18:04.781 )") 00:18:04.781 01:52:50 -- nvmf/common.sh@542 -- # cat 00:18:04.781 01:52:50 -- nvmf/common.sh@542 -- # cat 00:18:04.781 01:52:50 -- target/bdev_io_wait.sh@37 -- # wait 2158214 00:18:04.781 01:52:50 -- nvmf/common.sh@542 -- # cat 00:18:04.781 01:52:50 -- nvmf/common.sh@544 -- # jq . 00:18:04.781 01:52:50 -- nvmf/common.sh@544 -- # jq . 00:18:04.781 01:52:50 -- nvmf/common.sh@544 -- # jq . 00:18:04.781 01:52:50 -- nvmf/common.sh@544 -- # jq . 00:18:04.781 01:52:50 -- nvmf/common.sh@545 -- # IFS=, 00:18:04.781 01:52:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:04.781 "params": { 00:18:04.781 "name": "Nvme1", 00:18:04.781 "trtype": "tcp", 00:18:04.781 "traddr": "10.0.0.2", 00:18:04.781 "adrfam": "ipv4", 00:18:04.781 "trsvcid": "4420", 00:18:04.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.781 "hdgst": false, 00:18:04.781 "ddgst": false 00:18:04.781 }, 00:18:04.781 "method": "bdev_nvme_attach_controller" 00:18:04.781 }' 00:18:04.781 01:52:50 -- nvmf/common.sh@545 -- # IFS=, 00:18:04.781 01:52:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:04.781 "params": { 00:18:04.781 "name": "Nvme1", 00:18:04.781 "trtype": "tcp", 00:18:04.781 "traddr": "10.0.0.2", 00:18:04.781 "adrfam": "ipv4", 00:18:04.781 "trsvcid": "4420", 00:18:04.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.781 "hdgst": false, 00:18:04.781 "ddgst": false 00:18:04.781 }, 00:18:04.781 "method": "bdev_nvme_attach_controller" 00:18:04.781 }' 00:18:04.781 01:52:50 -- nvmf/common.sh@545 -- # IFS=, 00:18:04.781 01:52:50 -- nvmf/common.sh@545 -- # IFS=, 00:18:04.781 01:52:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:04.781 "params": { 00:18:04.781 "name": "Nvme1", 00:18:04.781 "trtype": "tcp", 00:18:04.781 "traddr": "10.0.0.2", 00:18:04.781 "adrfam": "ipv4", 00:18:04.781 "trsvcid": "4420", 00:18:04.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.781 "hdgst": false, 00:18:04.781 "ddgst": false 00:18:04.781 }, 00:18:04.781 "method": "bdev_nvme_attach_controller" 00:18:04.781 }' 00:18:04.781 01:52:50 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:04.781 "params": { 00:18:04.781 "name": "Nvme1", 00:18:04.781 "trtype": "tcp", 00:18:04.781 "traddr": "10.0.0.2", 
00:18:04.781 "adrfam": "ipv4", 00:18:04.781 "trsvcid": "4420", 00:18:04.781 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.781 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.781 "hdgst": false, 00:18:04.781 "ddgst": false 00:18:04.781 }, 00:18:04.781 "method": "bdev_nvme_attach_controller" 00:18:04.781 }' 00:18:04.781 [2024-04-15 01:52:50.246734] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:04.781 [2024-04-15 01:52:50.246805] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:18:04.781 [2024-04-15 01:52:50.246819] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:04.781 [2024-04-15 01:52:50.246820] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:04.781 [2024-04-15 01:52:50.246821] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:04.781 [2024-04-15 01:52:50.246904] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-04-15 01:52:50.246904] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:18:04.781 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:18:04.781 [2024-04-15 01:52:50.246908] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:18:04.781 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.781 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.781 [2024-04-15 01:52:50.423869] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.041 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.041 [2024-04-15 01:52:50.498393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:05.041 [2024-04-15 01:52:50.524379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.041 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.041 [2024-04-15 01:52:50.597923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:05.041 [2024-04-15 01:52:50.623871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.299 [2024-04-15 01:52:50.697514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.299 [2024-04-15 01:52:50.700901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:05.299 [2024-04-15 01:52:50.765156] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:18:05.299 Running I/O for 1 seconds... 00:18:05.299 Running I/O for 1 seconds... 00:18:05.299 Running I/O for 1 seconds... 00:18:05.299 Running I/O for 1 seconds... 
00:18:06.236 00:18:06.236 Latency(us) 00:18:06.236 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.236 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:18:06.236 Nvme1n1 : 1.01 9731.03 38.01 0.00 0.00 13131.56 4369.07 23690.05 00:18:06.236 =================================================================================================================== 00:18:06.236 Total : 9731.03 38.01 0.00 0.00 13131.56 4369.07 23690.05 00:18:06.494 00:18:06.494 Latency(us) 00:18:06.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.494 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:18:06.494 Nvme1n1 : 1.00 192219.45 750.86 0.00 0.00 663.38 257.90 892.02 00:18:06.494 =================================================================================================================== 00:18:06.494 Total : 192219.45 750.86 0.00 0.00 663.38 257.90 892.02 00:18:06.494 00:18:06.494 Latency(us) 00:18:06.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.494 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:18:06.494 Nvme1n1 : 1.01 9568.82 37.38 0.00 0.00 13322.92 5000.15 24563.86 00:18:06.494 =================================================================================================================== 00:18:06.494 Total : 9568.82 37.38 0.00 0.00 13322.92 5000.15 24563.86 00:18:06.494 00:18:06.494 Latency(us) 00:18:06.494 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.494 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:18:06.494 Nvme1n1 : 1.02 2862.98 11.18 0.00 0.00 44320.41 8155.59 61361.11 00:18:06.494 =================================================================================================================== 00:18:06.494 Total : 2862.98 11.18 0.00 0.00 44320.41 8155.59 61361.11 00:18:06.761 01:52:52 -- target/bdev_io_wait.sh@38 -- # wait 2158215 00:18:06.761 01:52:52 -- target/bdev_io_wait.sh@39 -- # wait 2158218 00:18:06.761 01:52:52 -- target/bdev_io_wait.sh@40 -- # wait 2158220 00:18:06.761 01:52:52 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.761 01:52:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:06.761 01:52:52 -- common/autotest_common.sh@10 -- # set +x 00:18:06.761 01:52:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:06.761 01:52:52 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:18:06.761 01:52:52 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:18:06.761 01:52:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:06.761 01:52:52 -- nvmf/common.sh@116 -- # sync 00:18:06.761 01:52:52 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:06.761 01:52:52 -- nvmf/common.sh@119 -- # set +e 00:18:06.761 01:52:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:06.761 01:52:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:06.761 rmmod nvme_tcp 00:18:06.761 rmmod nvme_fabrics 00:18:06.761 rmmod nvme_keyring 00:18:06.761 01:52:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:06.761 01:52:52 -- nvmf/common.sh@123 -- # set -e 00:18:06.761 01:52:52 -- nvmf/common.sh@124 -- # return 0 00:18:06.761 01:52:52 -- nvmf/common.sh@477 -- # '[' -n 2158067 ']' 00:18:06.761 01:52:52 -- nvmf/common.sh@478 -- # killprocess 2158067 00:18:06.761 01:52:52 -- common/autotest_common.sh@926 -- # '[' -z 2158067 ']' 00:18:06.761 01:52:52 -- 
common/autotest_common.sh@930 -- # kill -0 2158067 00:18:06.761 01:52:52 -- common/autotest_common.sh@931 -- # uname 00:18:06.761 01:52:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:06.761 01:52:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2158067 00:18:06.761 01:52:52 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:06.761 01:52:52 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:06.761 01:52:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2158067' 00:18:06.761 killing process with pid 2158067 00:18:06.761 01:52:52 -- common/autotest_common.sh@945 -- # kill 2158067 00:18:06.761 01:52:52 -- common/autotest_common.sh@950 -- # wait 2158067 00:18:07.019 01:52:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:07.019 01:52:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:07.019 01:52:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:07.019 01:52:52 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:07.019 01:52:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:07.019 01:52:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.019 01:52:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.019 01:52:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.586 01:52:54 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:09.586 00:18:09.586 real 0m7.107s 00:18:09.586 user 0m15.552s 00:18:09.586 sys 0m3.386s 00:18:09.586 01:52:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:09.586 01:52:54 -- common/autotest_common.sh@10 -- # set +x 00:18:09.586 ************************************ 00:18:09.586 END TEST nvmf_bdev_io_wait 00:18:09.586 ************************************ 00:18:09.586 01:52:54 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:09.586 01:52:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:09.586 01:52:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:09.586 01:52:54 -- common/autotest_common.sh@10 -- # set +x 00:18:09.586 ************************************ 00:18:09.586 START TEST nvmf_queue_depth 00:18:09.586 ************************************ 00:18:09.586 01:52:54 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:18:09.586 * Looking for test storage... 
00:18:09.586 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.586 01:52:54 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.586 01:52:54 -- nvmf/common.sh@7 -- # uname -s 00:18:09.586 01:52:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.586 01:52:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.586 01:52:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.586 01:52:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.586 01:52:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.586 01:52:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.586 01:52:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.586 01:52:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.586 01:52:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.586 01:52:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.586 01:52:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.586 01:52:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.586 01:52:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.586 01:52:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.586 01:52:54 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.586 01:52:54 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.586 01:52:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.586 01:52:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.586 01:52:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.586 01:52:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.586 01:52:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.586 01:52:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.586 01:52:54 -- paths/export.sh@5 -- # export PATH 00:18:09.586 01:52:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.586 01:52:54 -- nvmf/common.sh@46 -- # : 0 00:18:09.586 01:52:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:09.586 01:52:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:09.586 01:52:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:09.586 01:52:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.586 01:52:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.586 01:52:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:09.586 01:52:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:09.586 01:52:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:09.586 01:52:54 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:18:09.586 01:52:54 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:18:09.586 01:52:54 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:09.586 01:52:54 -- target/queue_depth.sh@19 -- # nvmftestinit 00:18:09.586 01:52:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:09.586 01:52:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.586 01:52:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:09.586 01:52:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:09.586 01:52:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:09.586 01:52:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.586 01:52:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.586 01:52:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.586 01:52:54 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:09.586 01:52:54 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:09.586 01:52:54 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:09.587 01:52:54 -- common/autotest_common.sh@10 -- # set +x 00:18:10.965 01:52:56 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:10.965 01:52:56 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:10.965 01:52:56 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:10.965 01:52:56 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:10.965 01:52:56 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:10.965 01:52:56 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:10.965 01:52:56 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:10.965 01:52:56 -- nvmf/common.sh@294 -- # net_devs=() 
00:18:10.965 01:52:56 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:10.965 01:52:56 -- nvmf/common.sh@295 -- # e810=() 00:18:10.965 01:52:56 -- nvmf/common.sh@295 -- # local -ga e810 00:18:10.965 01:52:56 -- nvmf/common.sh@296 -- # x722=() 00:18:10.965 01:52:56 -- nvmf/common.sh@296 -- # local -ga x722 00:18:10.965 01:52:56 -- nvmf/common.sh@297 -- # mlx=() 00:18:10.965 01:52:56 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:10.965 01:52:56 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.965 01:52:56 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.965 01:52:56 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.965 01:52:56 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.965 01:52:56 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.965 01:52:56 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.965 01:52:56 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.965 01:52:56 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.965 01:52:56 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.965 01:52:56 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.965 01:52:56 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.965 01:52:56 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:10.965 01:52:56 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:10.965 01:52:56 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:10.965 01:52:56 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:10.965 01:52:56 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:10.965 01:52:56 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:10.965 01:52:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:10.965 01:52:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:10.965 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:10.965 01:52:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:10.965 01:52:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:10.965 01:52:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.965 01:52:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.965 01:52:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:10.965 01:52:56 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:10.965 01:52:56 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:10.965 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:10.965 01:52:56 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:10.965 01:52:56 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:10.965 01:52:56 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.965 01:52:56 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.965 01:52:56 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:10.965 01:52:56 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:10.965 01:52:56 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:10.965 01:52:56 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:10.965 01:52:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:10.965 01:52:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.965 01:52:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:10.965 01:52:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:18:10.965 01:52:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:10.965 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:10.966 01:52:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.966 01:52:56 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:10.966 01:52:56 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.966 01:52:56 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:10.966 01:52:56 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.966 01:52:56 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:10.966 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:10.966 01:52:56 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.966 01:52:56 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:10.966 01:52:56 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:10.966 01:52:56 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:10.966 01:52:56 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:10.966 01:52:56 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:10.966 01:52:56 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:10.966 01:52:56 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:10.966 01:52:56 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:10.966 01:52:56 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:10.966 01:52:56 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:10.966 01:52:56 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:10.966 01:52:56 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:10.966 01:52:56 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:10.966 01:52:56 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:10.966 01:52:56 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:10.966 01:52:56 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:10.966 01:52:56 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:10.966 01:52:56 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:11.225 01:52:56 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:11.225 01:52:56 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:11.225 01:52:56 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:11.225 01:52:56 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:11.225 01:52:56 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:11.225 01:52:56 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:11.225 01:52:56 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:11.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.227 ms 00:18:11.225 00:18:11.225 --- 10.0.0.2 ping statistics --- 00:18:11.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.225 rtt min/avg/max/mdev = 0.227/0.227/0.227/0.000 ms 00:18:11.225 01:52:56 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:11.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:11.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:18:11.225 00:18:11.225 --- 10.0.0.1 ping statistics --- 00:18:11.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.225 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:18:11.225 01:52:56 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.225 01:52:56 -- nvmf/common.sh@410 -- # return 0 00:18:11.225 01:52:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:11.225 01:52:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.225 01:52:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:11.225 01:52:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:11.225 01:52:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.225 01:52:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:11.225 01:52:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:11.225 01:52:56 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:18:11.225 01:52:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:11.225 01:52:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:11.225 01:52:56 -- common/autotest_common.sh@10 -- # set +x 00:18:11.225 01:52:56 -- nvmf/common.sh@469 -- # nvmfpid=2160424 00:18:11.225 01:52:56 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:11.225 01:52:56 -- nvmf/common.sh@470 -- # waitforlisten 2160424 00:18:11.225 01:52:56 -- common/autotest_common.sh@819 -- # '[' -z 2160424 ']' 00:18:11.225 01:52:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.225 01:52:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:11.225 01:52:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.225 01:52:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:11.225 01:52:56 -- common/autotest_common.sh@10 -- # set +x 00:18:11.225 [2024-04-15 01:52:56.789266] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:11.225 [2024-04-15 01:52:56.789364] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.225 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.225 [2024-04-15 01:52:56.852184] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.484 [2024-04-15 01:52:56.933513] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:11.484 [2024-04-15 01:52:56.933657] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.484 [2024-04-15 01:52:56.933674] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:11.484 [2024-04-15 01:52:56.933686] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
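From here the queue-depth test repeats the target bring-up and then drives bdevperf over RPC rather than a --json config: -z starts it idle on a private socket, the NVMe-oF controller is attached through that socket, and bdevperf.py triggers the run. A sketch using the same parameters as this run; the intent is that 1024 outstanding I/Os oversubscribe what a single target qpair accepts at once, so the initiator-side bdev layer has to queue the excess:

    # bdevperf waits for RPC (-z) on its own socket: 1024-deep, 4 KiB verify I/O for 10 s
    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    # attach the target namespace as bdev NVMe0n1 through bdevperf's RPC socket
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # start the configured job; the results table below is printed when it finishes
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests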
00:18:11.484 [2024-04-15 01:52:56.933711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.420 01:52:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:12.420 01:52:57 -- common/autotest_common.sh@852 -- # return 0 00:18:12.420 01:52:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:12.420 01:52:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:12.420 01:52:57 -- common/autotest_common.sh@10 -- # set +x 00:18:12.420 01:52:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.420 01:52:57 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:12.420 01:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.420 01:52:57 -- common/autotest_common.sh@10 -- # set +x 00:18:12.420 [2024-04-15 01:52:57.788515] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.420 01:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.420 01:52:57 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:12.420 01:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.420 01:52:57 -- common/autotest_common.sh@10 -- # set +x 00:18:12.420 Malloc0 00:18:12.420 01:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.420 01:52:57 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:12.420 01:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.420 01:52:57 -- common/autotest_common.sh@10 -- # set +x 00:18:12.420 01:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.420 01:52:57 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:12.420 01:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.420 01:52:57 -- common/autotest_common.sh@10 -- # set +x 00:18:12.420 01:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.420 01:52:57 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:12.420 01:52:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:12.420 01:52:57 -- common/autotest_common.sh@10 -- # set +x 00:18:12.420 [2024-04-15 01:52:57.851936] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.420 01:52:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:12.420 01:52:57 -- target/queue_depth.sh@30 -- # bdevperf_pid=2160552 00:18:12.420 01:52:57 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:18:12.420 01:52:57 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:12.420 01:52:57 -- target/queue_depth.sh@33 -- # waitforlisten 2160552 /var/tmp/bdevperf.sock 00:18:12.420 01:52:57 -- common/autotest_common.sh@819 -- # '[' -z 2160552 ']' 00:18:12.420 01:52:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.420 01:52:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:12.421 01:52:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:12.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:12.421 01:52:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:12.421 01:52:57 -- common/autotest_common.sh@10 -- # set +x 00:18:12.421 [2024-04-15 01:52:57.894632] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:12.421 [2024-04-15 01:52:57.894699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2160552 ] 00:18:12.421 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.421 [2024-04-15 01:52:57.958997] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.421 [2024-04-15 01:52:58.049238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.361 01:52:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:13.361 01:52:58 -- common/autotest_common.sh@852 -- # return 0 00:18:13.361 01:52:58 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:18:13.361 01:52:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:13.361 01:52:58 -- common/autotest_common.sh@10 -- # set +x 00:18:13.619 NVMe0n1 00:18:13.619 01:52:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:13.619 01:52:59 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:13.879 Running I/O for 10 seconds... 00:18:23.861 00:18:23.861 Latency(us) 00:18:23.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.861 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:18:23.861 Verification LBA range: start 0x0 length 0x4000 00:18:23.861 NVMe0n1 : 10.07 13378.58 52.26 0.00 0.00 76262.87 15340.28 61361.11 00:18:23.861 =================================================================================================================== 00:18:23.861 Total : 13378.58 52.26 0.00 0.00 76262.87 15340.28 61361.11 00:18:23.861 0 00:18:23.861 01:53:09 -- target/queue_depth.sh@39 -- # killprocess 2160552 00:18:23.861 01:53:09 -- common/autotest_common.sh@926 -- # '[' -z 2160552 ']' 00:18:23.861 01:53:09 -- common/autotest_common.sh@930 -- # kill -0 2160552 00:18:23.861 01:53:09 -- common/autotest_common.sh@931 -- # uname 00:18:23.861 01:53:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:23.861 01:53:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2160552 00:18:23.861 01:53:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:23.861 01:53:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:23.861 01:53:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2160552' 00:18:23.861 killing process with pid 2160552 00:18:23.861 01:53:09 -- common/autotest_common.sh@945 -- # kill 2160552 00:18:23.861 Received shutdown signal, test time was about 10.000000 seconds 00:18:23.861 00:18:23.861 Latency(us) 00:18:23.861 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.861 =================================================================================================================== 00:18:23.861 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:23.861 01:53:09 -- 
common/autotest_common.sh@950 -- # wait 2160552 00:18:24.120 01:53:09 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:24.120 01:53:09 -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:24.120 01:53:09 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:24.120 01:53:09 -- nvmf/common.sh@116 -- # sync 00:18:24.120 01:53:09 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:24.120 01:53:09 -- nvmf/common.sh@119 -- # set +e 00:18:24.120 01:53:09 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:24.120 01:53:09 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:24.120 rmmod nvme_tcp 00:18:24.120 rmmod nvme_fabrics 00:18:24.120 rmmod nvme_keyring 00:18:24.120 01:53:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:24.120 01:53:09 -- nvmf/common.sh@123 -- # set -e 00:18:24.120 01:53:09 -- nvmf/common.sh@124 -- # return 0 00:18:24.120 01:53:09 -- nvmf/common.sh@477 -- # '[' -n 2160424 ']' 00:18:24.120 01:53:09 -- nvmf/common.sh@478 -- # killprocess 2160424 00:18:24.120 01:53:09 -- common/autotest_common.sh@926 -- # '[' -z 2160424 ']' 00:18:24.120 01:53:09 -- common/autotest_common.sh@930 -- # kill -0 2160424 00:18:24.120 01:53:09 -- common/autotest_common.sh@931 -- # uname 00:18:24.120 01:53:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:24.120 01:53:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2160424 00:18:24.120 01:53:09 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:24.120 01:53:09 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:24.120 01:53:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2160424' 00:18:24.120 killing process with pid 2160424 00:18:24.120 01:53:09 -- common/autotest_common.sh@945 -- # kill 2160424 00:18:24.120 01:53:09 -- common/autotest_common.sh@950 -- # wait 2160424 00:18:24.378 01:53:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:24.378 01:53:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:24.378 01:53:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:24.378 01:53:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:24.378 01:53:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:24.378 01:53:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:24.378 01:53:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:24.379 01:53:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.913 01:53:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:26.913 00:18:26.913 real 0m17.418s 00:18:26.913 user 0m25.271s 00:18:26.913 sys 0m3.082s 00:18:26.913 01:53:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:26.913 01:53:12 -- common/autotest_common.sh@10 -- # set +x 00:18:26.913 ************************************ 00:18:26.913 END TEST nvmf_queue_depth 00:18:26.913 ************************************ 00:18:26.913 01:53:12 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:26.913 01:53:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:26.913 01:53:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:26.913 01:53:12 -- common/autotest_common.sh@10 -- # set +x 00:18:26.913 ************************************ 00:18:26.913 START TEST nvmf_multipath 00:18:26.913 ************************************ 00:18:26.913 01:53:12 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:26.913 * Looking for test storage... 00:18:26.913 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:26.913 01:53:12 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:26.913 01:53:12 -- nvmf/common.sh@7 -- # uname -s 00:18:26.913 01:53:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:26.913 01:53:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:26.913 01:53:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:26.913 01:53:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:26.913 01:53:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:26.913 01:53:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:26.913 01:53:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:26.913 01:53:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:26.913 01:53:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:26.913 01:53:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:26.913 01:53:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.913 01:53:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:26.913 01:53:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:26.913 01:53:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:26.913 01:53:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:26.913 01:53:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:26.913 01:53:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:26.913 01:53:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:26.913 01:53:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:26.913 01:53:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.913 01:53:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.913 01:53:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.913 01:53:12 -- paths/export.sh@5 -- # export PATH 00:18:26.913 01:53:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:26.913 01:53:12 -- nvmf/common.sh@46 -- # : 0 00:18:26.913 01:53:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:26.913 01:53:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:26.913 01:53:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:26.913 01:53:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:26.913 01:53:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:26.913 01:53:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:26.913 01:53:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:26.913 01:53:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:26.913 01:53:12 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:26.913 01:53:12 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:26.913 01:53:12 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:26.913 01:53:12 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:26.913 01:53:12 -- target/multipath.sh@43 -- # nvmftestinit 00:18:26.913 01:53:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:26.913 01:53:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:26.913 01:53:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:26.913 01:53:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:26.913 01:53:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:26.913 01:53:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:26.913 01:53:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:26.913 01:53:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:26.913 01:53:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:26.913 01:53:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:26.913 01:53:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:26.913 01:53:12 -- common/autotest_common.sh@10 -- # set +x 00:18:28.823 01:53:14 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:28.823 01:53:14 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:28.823 01:53:14 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:28.823 01:53:14 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:28.823 01:53:14 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:28.823 01:53:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:28.823 01:53:14 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:18:28.823 01:53:14 -- nvmf/common.sh@294 -- # net_devs=() 00:18:28.823 01:53:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:28.823 01:53:14 -- nvmf/common.sh@295 -- # e810=() 00:18:28.823 01:53:14 -- nvmf/common.sh@295 -- # local -ga e810 00:18:28.823 01:53:14 -- nvmf/common.sh@296 -- # x722=() 00:18:28.823 01:53:14 -- nvmf/common.sh@296 -- # local -ga x722 00:18:28.823 01:53:14 -- nvmf/common.sh@297 -- # mlx=() 00:18:28.823 01:53:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:28.823 01:53:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:28.823 01:53:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:28.823 01:53:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:28.823 01:53:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:28.823 01:53:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:28.823 01:53:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:28.823 01:53:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:28.823 01:53:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:28.823 01:53:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:28.823 01:53:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:28.823 01:53:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:28.823 01:53:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:28.823 01:53:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:28.823 01:53:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:28.823 01:53:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:28.823 01:53:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:28.823 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:28.823 01:53:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:28.823 01:53:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:28.823 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:28.823 01:53:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:28.823 01:53:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:28.823 01:53:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.823 01:53:14 -- nvmf/common.sh@383 -- # (( 1 
== 0 )) 00:18:28.823 01:53:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.823 01:53:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:28.823 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:28.823 01:53:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.823 01:53:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:28.823 01:53:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:28.823 01:53:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:28.823 01:53:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:28.823 01:53:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:28.823 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:28.823 01:53:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:28.823 01:53:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:28.823 01:53:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:28.823 01:53:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:28.823 01:53:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:28.823 01:53:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:28.823 01:53:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:28.823 01:53:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:28.823 01:53:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:28.823 01:53:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:28.823 01:53:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:28.823 01:53:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:28.823 01:53:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:28.823 01:53:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:28.823 01:53:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:28.823 01:53:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:28.823 01:53:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:28.823 01:53:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:28.823 01:53:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:28.823 01:53:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:28.823 01:53:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:28.823 01:53:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:28.823 01:53:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:28.823 01:53:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:28.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:28.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:18:28.823 00:18:28.823 --- 10.0.0.2 ping statistics --- 00:18:28.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.823 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:18:28.823 01:53:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:28.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:28.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.245 ms 00:18:28.823 00:18:28.823 --- 10.0.0.1 ping statistics --- 00:18:28.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:28.823 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:18:28.823 01:53:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:28.823 01:53:14 -- nvmf/common.sh@410 -- # return 0 00:18:28.823 01:53:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:28.823 01:53:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:28.823 01:53:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:28.823 01:53:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:28.823 01:53:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:28.823 01:53:14 -- target/multipath.sh@45 -- # '[' -z ']' 00:18:28.823 01:53:14 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:28.823 only one NIC for nvmf test 00:18:28.823 01:53:14 -- target/multipath.sh@47 -- # nvmftestfini 00:18:28.823 01:53:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:28.823 01:53:14 -- nvmf/common.sh@116 -- # sync 00:18:28.823 01:53:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:28.823 01:53:14 -- nvmf/common.sh@119 -- # set +e 00:18:28.823 01:53:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:28.823 01:53:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:28.823 rmmod nvme_tcp 00:18:28.823 rmmod nvme_fabrics 00:18:28.823 rmmod nvme_keyring 00:18:28.823 01:53:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:28.823 01:53:14 -- nvmf/common.sh@123 -- # set -e 00:18:28.823 01:53:14 -- nvmf/common.sh@124 -- # return 0 00:18:28.823 01:53:14 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:28.823 01:53:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:28.823 01:53:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:28.823 01:53:14 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:28.823 01:53:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:28.823 01:53:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.823 01:53:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:28.823 01:53:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.733 01:53:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:30.733 01:53:16 -- target/multipath.sh@48 -- # exit 0 00:18:30.733 01:53:16 -- target/multipath.sh@1 -- # nvmftestfini 00:18:30.733 01:53:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:30.733 01:53:16 -- nvmf/common.sh@116 -- # sync 00:18:30.733 01:53:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:30.733 01:53:16 -- nvmf/common.sh@119 -- # set +e 00:18:30.733 01:53:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:30.733 01:53:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:30.733 01:53:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:30.733 01:53:16 -- nvmf/common.sh@123 -- # set -e 00:18:30.733 01:53:16 -- nvmf/common.sh@124 -- # return 0 00:18:30.733 01:53:16 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:18:30.733 01:53:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:30.733 01:53:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:30.733 01:53:16 -- nvmf/common.sh@484 -- # 
nvmf_tcp_fini 00:18:30.733 01:53:16 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:30.733 01:53:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:30.733 01:53:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.733 01:53:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.733 01:53:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.733 01:53:16 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:30.733 00:18:30.733 real 0m4.259s 00:18:30.733 user 0m0.813s 00:18:30.733 sys 0m1.422s 00:18:30.733 01:53:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:30.733 01:53:16 -- common/autotest_common.sh@10 -- # set +x 00:18:30.734 ************************************ 00:18:30.734 END TEST nvmf_multipath 00:18:30.734 ************************************ 00:18:30.734 01:53:16 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:30.734 01:53:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:30.734 01:53:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:30.734 01:53:16 -- common/autotest_common.sh@10 -- # set +x 00:18:30.734 ************************************ 00:18:30.734 START TEST nvmf_zcopy 00:18:30.734 ************************************ 00:18:30.734 01:53:16 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:30.993 * Looking for test storage... 00:18:30.993 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:30.993 01:53:16 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:30.993 01:53:16 -- nvmf/common.sh@7 -- # uname -s 00:18:30.993 01:53:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:30.993 01:53:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:30.993 01:53:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:30.993 01:53:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:30.993 01:53:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:30.993 01:53:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:30.993 01:53:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:30.993 01:53:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:30.993 01:53:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:30.993 01:53:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:30.993 01:53:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.993 01:53:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:30.993 01:53:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:30.993 01:53:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:30.993 01:53:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:30.993 01:53:16 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:30.993 01:53:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:30.993 01:53:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:30.993 01:53:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:30.993 01:53:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.993 01:53:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.993 01:53:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.993 01:53:16 -- paths/export.sh@5 -- # export PATH 00:18:30.993 01:53:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:30.993 01:53:16 -- nvmf/common.sh@46 -- # : 0 00:18:30.993 01:53:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:30.993 01:53:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:30.993 01:53:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:30.993 01:53:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:30.993 01:53:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:30.993 01:53:16 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:30.993 01:53:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:30.993 01:53:16 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:30.993 01:53:16 -- target/zcopy.sh@12 -- # nvmftestinit 00:18:30.993 01:53:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:30.993 01:53:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:30.993 01:53:16 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:30.993 01:53:16 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:30.993 01:53:16 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:30.993 01:53:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.993 01:53:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:30.993 01:53:16 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:30.993 01:53:16 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:30.993 01:53:16 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:30.993 01:53:16 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:30.993 01:53:16 -- common/autotest_common.sh@10 -- # set +x 00:18:32.930 01:53:18 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:32.930 01:53:18 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:32.930 01:53:18 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:32.930 01:53:18 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:32.930 01:53:18 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:32.930 01:53:18 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:32.930 01:53:18 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:32.930 01:53:18 -- nvmf/common.sh@294 -- # net_devs=() 00:18:32.930 01:53:18 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:32.930 01:53:18 -- nvmf/common.sh@295 -- # e810=() 00:18:32.930 01:53:18 -- nvmf/common.sh@295 -- # local -ga e810 00:18:32.930 01:53:18 -- nvmf/common.sh@296 -- # x722=() 00:18:32.930 01:53:18 -- nvmf/common.sh@296 -- # local -ga x722 00:18:32.930 01:53:18 -- nvmf/common.sh@297 -- # mlx=() 00:18:32.930 01:53:18 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:32.930 01:53:18 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:32.930 01:53:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:32.930 01:53:18 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:32.930 01:53:18 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:32.930 01:53:18 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:32.930 01:53:18 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:32.930 01:53:18 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:32.930 01:53:18 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:32.930 01:53:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:32.930 01:53:18 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:32.930 01:53:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:32.930 01:53:18 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:32.930 01:53:18 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:18:32.930 01:53:18 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:32.930 01:53:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:32.930 01:53:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:32.930 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:32.930 01:53:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:32.930 01:53:18 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:32.930 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:32.930 
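The pci_devs bucketing traced here keys NICs off their PCI vendor:device IDs; this run matches 0x8086:0x159b into the e810 list. A rough standalone equivalent that reads sysfs directly is sketched below — only IDs actually named in this trace are spelled out, and the Mellanox wildcard is illustrative rather than the harness's full ID table.

    #!/usr/bin/env bash
    # Sketch: classify NICs by PCI vendor/device ID as nvmf/common.sh does.
    # ID list limited to IDs visible in this trace; illustrative only.
    for dev in /sys/bus/pci/devices/*; do
        id="$(cat "$dev/vendor"):$(cat "$dev/device")"
        case "$id" in
            0x8086:0x1592 | 0x8086:0x159b) echo "${dev##*/}: Intel E810 (e810)" ;;
            0x8086:0x37d2)                 echo "${dev##*/}: Intel X722 (x722)" ;;
            0x15b3:*)                      echo "${dev##*/}: Mellanox (mlx)" ;;
        esac
    done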
01:53:18 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:32.930 01:53:18 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:32.930 01:53:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.930 01:53:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:32.930 01:53:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.930 01:53:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:32.930 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:32.930 01:53:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.930 01:53:18 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:32.930 01:53:18 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:32.930 01:53:18 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:32.930 01:53:18 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:32.930 01:53:18 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:32.930 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:32.930 01:53:18 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:32.930 01:53:18 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:32.930 01:53:18 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:32.930 01:53:18 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:18:32.930 01:53:18 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:32.930 01:53:18 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:32.930 01:53:18 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:32.930 01:53:18 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:18:32.930 01:53:18 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:32.930 01:53:18 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:32.930 01:53:18 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:18:32.930 01:53:18 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:32.930 01:53:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:32.930 01:53:18 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:18:32.930 01:53:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:18:32.930 01:53:18 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:18:32.930 01:53:18 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:32.930 01:53:18 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:32.930 01:53:18 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:32.930 01:53:18 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:18:32.930 01:53:18 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:32.930 01:53:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:32.930 01:53:18 -- 
nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:32.930 01:53:18 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:18:32.930 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:32.930 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:18:32.930 00:18:32.930 --- 10.0.0.2 ping statistics --- 00:18:32.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.930 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:18:32.930 01:53:18 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:32.930 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:32.930 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:18:32.930 00:18:32.930 --- 10.0.0.1 ping statistics --- 00:18:32.930 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:32.930 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:18:32.930 01:53:18 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:32.930 01:53:18 -- nvmf/common.sh@410 -- # return 0 00:18:32.930 01:53:18 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:32.930 01:53:18 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:32.930 01:53:18 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:32.930 01:53:18 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:32.930 01:53:18 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:32.930 01:53:18 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:32.930 01:53:18 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:32.930 01:53:18 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:32.930 01:53:18 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:32.930 01:53:18 -- common/autotest_common.sh@10 -- # set +x 00:18:32.930 01:53:18 -- nvmf/common.sh@469 -- # nvmfpid=2165857 00:18:32.930 01:53:18 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:32.930 01:53:18 -- nvmf/common.sh@470 -- # waitforlisten 2165857 00:18:32.930 01:53:18 -- common/autotest_common.sh@819 -- # '[' -z 2165857 ']' 00:18:32.930 01:53:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:32.930 01:53:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:32.930 01:53:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:32.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:32.930 01:53:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:32.930 01:53:18 -- common/autotest_common.sh@10 -- # set +x 00:18:32.930 [2024-04-15 01:53:18.465592] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:18:32.931 [2024-04-15 01:53:18.465666] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.931 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.931 [2024-04-15 01:53:18.534637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.191 [2024-04-15 01:53:18.633989] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:33.191 [2024-04-15 01:53:18.634163] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.191 [2024-04-15 01:53:18.634185] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.191 [2024-04-15 01:53:18.634199] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:33.191 [2024-04-15 01:53:18.634231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.129 01:53:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:34.129 01:53:19 -- common/autotest_common.sh@852 -- # return 0 00:18:34.129 01:53:19 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:34.129 01:53:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:34.129 01:53:19 -- common/autotest_common.sh@10 -- # set +x 00:18:34.129 01:53:19 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.129 01:53:19 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:34.129 01:53:19 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:34.129 01:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.129 01:53:19 -- common/autotest_common.sh@10 -- # set +x 00:18:34.129 [2024-04-15 01:53:19.494170] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.129 01:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.129 01:53:19 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:34.129 01:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.129 01:53:19 -- common/autotest_common.sh@10 -- # set +x 00:18:34.129 01:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.129 01:53:19 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:34.129 01:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.129 01:53:19 -- common/autotest_common.sh@10 -- # set +x 00:18:34.129 [2024-04-15 01:53:19.510328] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.129 01:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.129 01:53:19 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:34.129 01:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.129 01:53:19 -- common/autotest_common.sh@10 -- # set +x 00:18:34.129 01:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.129 01:53:19 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:34.129 01:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.129 01:53:19 -- common/autotest_common.sh@10 -- # set +x 00:18:34.129 malloc0 00:18:34.129 01:53:19 -- common/autotest_common.sh@579 -- # [[ 
0 == 0 ]] 00:18:34.129 01:53:19 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:34.129 01:53:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:34.129 01:53:19 -- common/autotest_common.sh@10 -- # set +x 00:18:34.129 01:53:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:34.129 01:53:19 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:34.129 01:53:19 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:34.129 01:53:19 -- nvmf/common.sh@520 -- # config=() 00:18:34.129 01:53:19 -- nvmf/common.sh@520 -- # local subsystem config 00:18:34.129 01:53:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:34.129 01:53:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:34.129 { 00:18:34.129 "params": { 00:18:34.129 "name": "Nvme$subsystem", 00:18:34.129 "trtype": "$TEST_TRANSPORT", 00:18:34.129 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:34.129 "adrfam": "ipv4", 00:18:34.129 "trsvcid": "$NVMF_PORT", 00:18:34.129 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:34.129 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:34.129 "hdgst": ${hdgst:-false}, 00:18:34.129 "ddgst": ${ddgst:-false} 00:18:34.129 }, 00:18:34.129 "method": "bdev_nvme_attach_controller" 00:18:34.129 } 00:18:34.129 EOF 00:18:34.129 )") 00:18:34.129 01:53:19 -- nvmf/common.sh@542 -- # cat 00:18:34.129 01:53:19 -- nvmf/common.sh@544 -- # jq . 00:18:34.129 01:53:19 -- nvmf/common.sh@545 -- # IFS=, 00:18:34.129 01:53:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:34.129 "params": { 00:18:34.129 "name": "Nvme1", 00:18:34.129 "trtype": "tcp", 00:18:34.129 "traddr": "10.0.0.2", 00:18:34.129 "adrfam": "ipv4", 00:18:34.129 "trsvcid": "4420", 00:18:34.129 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.129 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.129 "hdgst": false, 00:18:34.129 "ddgst": false 00:18:34.129 }, 00:18:34.129 "method": "bdev_nvme_attach_controller" 00:18:34.129 }' 00:18:34.129 [2024-04-15 01:53:19.585861] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:34.129 [2024-04-15 01:53:19.585952] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2166017 ] 00:18:34.129 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.129 [2024-04-15 01:53:19.651836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.129 [2024-04-15 01:53:19.744967] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.699 Running I/O for 10 seconds... 
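Unlike the queue_depth pass, bdevperf is not pointed at an RPC socket here: gen_nvmf_target_json prints a ready-made bdev configuration that bdevperf consumes through the anonymous pipe (--json /dev/fd/62). Reflowed from the printf output in the trace, the controller entry it feeds in is:

    {
      "params": {
        "name": "Nvme1",
        "trtype": "tcp",
        "traddr": "10.0.0.2",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

That single bdev_nvme_attach_controller call is what creates the Nvme1n1 bdev the 10-second verify run below reports against; hdgst and ddgst stay false, so no TCP header/data digests are in play.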
00:18:44.683 00:18:44.683 Latency(us) 00:18:44.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:44.683 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:18:44.683 Verification LBA range: start 0x0 length 0x1000 00:18:44.683 Nvme1n1 : 10.01 8965.81 70.05 0.00 0.00 14241.43 2051.03 25631.86 00:18:44.683 =================================================================================================================== 00:18:44.683 Total : 8965.81 70.05 0.00 0.00 14241.43 2051.03 25631.86 00:18:44.683 01:53:30 -- target/zcopy.sh@39 -- # perfpid=2167242 00:18:44.683 01:53:30 -- target/zcopy.sh@41 -- # xtrace_disable 00:18:44.683 01:53:30 -- common/autotest_common.sh@10 -- # set +x 00:18:44.683 01:53:30 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:18:44.683 01:53:30 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:18:44.683 01:53:30 -- nvmf/common.sh@520 -- # config=() 00:18:44.683 01:53:30 -- nvmf/common.sh@520 -- # local subsystem config 00:18:44.683 01:53:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:44.683 01:53:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:44.683 { 00:18:44.683 "params": { 00:18:44.683 "name": "Nvme$subsystem", 00:18:44.683 "trtype": "$TEST_TRANSPORT", 00:18:44.683 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:44.683 "adrfam": "ipv4", 00:18:44.683 "trsvcid": "$NVMF_PORT", 00:18:44.683 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:44.683 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:44.683 "hdgst": ${hdgst:-false}, 00:18:44.683 "ddgst": ${ddgst:-false} 00:18:44.683 }, 00:18:44.683 "method": "bdev_nvme_attach_controller" 00:18:44.683 } 00:18:44.683 EOF 00:18:44.683 )") 00:18:44.683 01:53:30 -- nvmf/common.sh@542 -- # cat 00:18:44.683 [2024-04-15 01:53:30.319271] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.683 [2024-04-15 01:53:30.319316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.683 01:53:30 -- nvmf/common.sh@544 -- # jq . 
00:18:44.683 01:53:30 -- nvmf/common.sh@545 -- # IFS=, 00:18:44.683 01:53:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:44.683 "params": { 00:18:44.683 "name": "Nvme1", 00:18:44.683 "trtype": "tcp", 00:18:44.683 "traddr": "10.0.0.2", 00:18:44.683 "adrfam": "ipv4", 00:18:44.683 "trsvcid": "4420", 00:18:44.683 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:44.683 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:44.683 "hdgst": false, 00:18:44.683 "ddgst": false 00:18:44.683 }, 00:18:44.683 "method": "bdev_nvme_attach_controller" 00:18:44.683 }' 00:18:44.683 [2024-04-15 01:53:30.327225] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.683 [2024-04-15 01:53:30.327250] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.943 [2024-04-15 01:53:30.335248] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.943 [2024-04-15 01:53:30.335272] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.943 [2024-04-15 01:53:30.343269] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.943 [2024-04-15 01:53:30.343291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.943 [2024-04-15 01:53:30.351296] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.943 [2024-04-15 01:53:30.351318] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.943 [2024-04-15 01:53:30.353865] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:18:44.943 [2024-04-15 01:53:30.353925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2167242 ] 00:18:44.943 [2024-04-15 01:53:30.359318] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.943 [2024-04-15 01:53:30.359355] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.943 [2024-04-15 01:53:30.367352] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.943 [2024-04-15 01:53:30.367373] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.943 [2024-04-15 01:53:30.375378] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.943 [2024-04-15 01:53:30.375398] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.943 [2024-04-15 01:53:30.383418] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.943 [2024-04-15 01:53:30.383444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.943 EAL: No free 2048 kB hugepages reported on node 1 00:18:44.943 [2024-04-15 01:53:30.391428] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.943 [2024-04-15 01:53:30.391453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.943 [2024-04-15 01:53:30.399460] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:44.943 [2024-04-15 01:53:30.399484] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:44.943 [2024-04-15 01:53:30.407482] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested 
00:18:44.943 [2024-04-15 01:53:30.419743] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:44.943 [2024-04-15 01:53:30.513894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:18:45.463 Running I/O for 5 seconds...
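Each [Requested NSID 1 already in use / Unable to add namespace] pair in this log is one rejected attempt to add a namespace under an NSID the subsystem has already allocated. With a target in that state, the same pair can be provoked by hand with SPDK's rpc.py; the Malloc0 bdev name here is an assumption, and the NQN is the one from this run:

    # Try to attach another namespace under an NSID the subsystem already uses;
    # the target logs the two *ERROR* lines above and the RPC returns an error.
    scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0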
00:18:45.463 [2024-04-15 01:53:30.856853] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:45.463 [2024-04-15 01:53:30.856878] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats for the duration of the 5-second run; duplicate entries through 01:53:33.166 omitted ...]
00:18:47.543 [2024-04-15 01:53:33.178522] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:47.543 [2024-04-15 01:53:33.178550]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.803 [2024-04-15 01:53:33.191272] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.803 [2024-04-15 01:53:33.191301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.803 [2024-04-15 01:53:33.201159] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.803 [2024-04-15 01:53:33.201187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.803 [2024-04-15 01:53:33.210565] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.803 [2024-04-15 01:53:33.210592] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.803 [2024-04-15 01:53:33.222486] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.803 [2024-04-15 01:53:33.222513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.803 [2024-04-15 01:53:33.232958] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.803 [2024-04-15 01:53:33.232985] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.803 [2024-04-15 01:53:33.242784] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.803 [2024-04-15 01:53:33.242818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.803 [2024-04-15 01:53:33.252527] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.803 [2024-04-15 01:53:33.252554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.803 [2024-04-15 01:53:33.263814] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.803 [2024-04-15 01:53:33.263840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.803 [2024-04-15 01:53:33.277307] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.803 [2024-04-15 01:53:33.277335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.803 [2024-04-15 01:53:33.287428] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.803 [2024-04-15 01:53:33.287455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.803 [2024-04-15 01:53:33.298889] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.803 [2024-04-15 01:53:33.298916] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.803 [2024-04-15 01:53:33.309127] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.803 [2024-04-15 01:53:33.309155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.803 [2024-04-15 01:53:33.319862] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.803 [2024-04-15 01:53:33.319888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.803 [2024-04-15 01:53:33.329809] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.803 [2024-04-15 01:53:33.329836] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.803 [2024-04-15 01:53:33.340511] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.803 [2024-04-15 01:53:33.340539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.803 [2024-04-15 01:53:33.350954] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.803 [2024-04-15 01:53:33.350981] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.803 [2024-04-15 01:53:33.360718] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.803 [2024-04-15 01:53:33.360745] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.803 [2024-04-15 01:53:33.370373] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.804 [2024-04-15 01:53:33.370417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.804 [2024-04-15 01:53:33.382925] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.804 [2024-04-15 01:53:33.382952] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.804 [2024-04-15 01:53:33.394313] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.804 [2024-04-15 01:53:33.394341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.804 [2024-04-15 01:53:33.404276] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.804 [2024-04-15 01:53:33.404303] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.804 [2024-04-15 01:53:33.415795] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.804 [2024-04-15 01:53:33.415823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.804 [2024-04-15 01:53:33.427164] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.804 [2024-04-15 01:53:33.427192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:47.804 [2024-04-15 01:53:33.437214] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:47.804 [2024-04-15 01:53:33.437243] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.451037] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.451077] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.462337] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.462364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.476620] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.476647] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.486677] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.486704] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.497862] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.497888] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.509774] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.509801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.520983] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.521014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.531373] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.531400] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.541270] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.541298] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.552738] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.552765] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.562195] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.562223] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.572547] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.572574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.583415] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.583441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.594227] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.594254] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.607965] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.607992] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.618406] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.618433] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.628322] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.628363] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.641106] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.641133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.652096] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.652124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.662319] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.662361] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.673439] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.673465] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.683637] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.683663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.694639] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.694665] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.062 [2024-04-15 01:53:33.704429] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.062 [2024-04-15 01:53:33.704455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.716003] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.716034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.726145] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.726173] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.738080] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.738107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.749729] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.749755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.760823] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.760849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.771997] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.772039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.782701] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.782732] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.794849] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.794876] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.804468] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.804494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.814925] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.814952] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.826320] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.826347] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.836857] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.836887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.846531] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.846556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.857211] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.857248] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.866525] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.866552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.876536] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.876562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.885802] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.885844] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.895927] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.895953] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.907296] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.907324] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.917792] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.320 [2024-04-15 01:53:33.917818] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.320 [2024-04-15 01:53:33.929487] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.321 [2024-04-15 01:53:33.929513] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.321 [2024-04-15 01:53:33.940599] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.321 [2024-04-15 01:53:33.940625] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.321 [2024-04-15 01:53:33.950062] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.321 [2024-04-15 01:53:33.950090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.321 [2024-04-15 01:53:33.961584] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.321 [2024-04-15 01:53:33.961611] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:33.970571] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:33.970597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:33.983478] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:33.983505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:33.992933] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:33.992974] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.003688] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.003715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.014735] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.014761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.024103] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.024130] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.036968] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.036996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.048571] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.048602] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.058477] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.058502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.068325] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.068352] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.078095] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.078122] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.090511] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.090538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.100729] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.100755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.113408] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.113435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.124551] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.124580] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.135172] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.135200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.145368] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.145394] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.159242] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.159271] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.171679] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.171706] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.182187] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.182215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.193941] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.193968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.202626] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.202652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.579 [2024-04-15 01:53:34.215509] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.579 [2024-04-15 01:53:34.215538] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.229122] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.229151] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.239856] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.239887] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.250445] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.250472] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.261416] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.261442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.271148] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.271187] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.283153] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.283181] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.294164] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.294192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.306412] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.306442] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.317236] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.317265] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.326388] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.326429] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.338653] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.338680] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.348282] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.348312] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.362588] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.362615] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.372082] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.372113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.382735] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.382766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.394281] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.394309] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.406613] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.406644] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.418471] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.418499] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.428238] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.428266] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.439109] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.439137] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.449041] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.449093] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.459433] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.459461] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.470943] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.470971] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:48.838 [2024-04-15 01:53:34.481376] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:48.838 [2024-04-15 01:53:34.481410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.492527] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.492556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.503720] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.503751] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.512941] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.512968] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.523721] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.523747] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.533811] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.533839] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.545917] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.545944] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.557196] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.557224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.566599] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.566626] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.578774] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.578801] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.588156] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.588183] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.598683] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.598710] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.610038] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.610092] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.620159] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.620185] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.629914] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.629940] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.640746] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.640777] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.650305] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.650348] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.663042] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.663080] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.675747] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.675773] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.686808] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.686856] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.698908] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.698935] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.708240] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.708268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.719822] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.719850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.098 [2024-04-15 01:53:34.730172] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.098 [2024-04-15 01:53:34.730200] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.099 [2024-04-15 01:53:34.741521] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.099 [2024-04-15 01:53:34.741549] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.752289] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.752316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.763006] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.763055] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.772452] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.772480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.786492] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.786518] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.795603] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.795645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.806513] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.806540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.816966] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.816991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.827822] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.827849] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.838792] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.838823] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.848196] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.848224] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.858245] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.858273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.869043] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.869095] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.880307] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.880335] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.890424] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.890458] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.900511] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.900552] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.911136] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.911164] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.920581] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.920608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.932753] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.932782] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.943252] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.943279] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.953127] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.953155] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.963162] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.963190] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.972074] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.972103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.983538] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.983581] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.359 [2024-04-15 01:53:34.993961] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.359 [2024-04-15 01:53:34.994001] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.006658] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.006688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.015694] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.015721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.027600] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.027632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.036826] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.036853] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.049994] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.050039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.058887] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.058928] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.069467] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.069494] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.080448] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.080474] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.094247] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.094275] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.104414] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.104441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.113420] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.113447] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.124348] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.124376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.135079] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.135108] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.147410] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.147438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.157737] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.157764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.166981] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.167009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.177847] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.177888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.187235] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.187262] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.197745] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.197771] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.207714] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.207740] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.217717] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.217744] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:49.618 [2024-04-15 01:53:35.228515] 
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:49.618 [2024-04-15 01:53:35.228545] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line failure (spdk_nvmf_subsystem_add_ns_ext rejecting the duplicate NSID 1, then nvmf_rpc_ns_paused failing the RPC) repeats nearly sixty more times at roughly 10 ms intervals between 01:53:35.237 and 01:53:35.849; the duplicate entries are trimmed here ...]
00:18:50.395 [2024-04-15 01:53:35.861817]
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.395 [2024-04-15 01:53:35.861846] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.395 [2024-04-15 01:53:35.871299] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.395 [2024-04-15 01:53:35.871327] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:50.395
00:18:50.395 Latency(us)
00:18:50.395 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:50.395 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:50.395 Nvme1n1 : 5.01 11848.91 92.57 0.00 0.00 10789.81 2548.62 25049.32
00:18:50.395 ===================================================================================================================
00:18:50.395 Total : 11848.91 92.57 0.00 0.00 10789.81 2548.62 25049.32
[... after the latency summary the same add-namespace failure pair resumes, some two dozen more repetitions at roughly 8 ms intervals between 01:53:35.876 and 01:53:36.068; the duplicate entries are trimmed here ...]
00:18:50.655 [2024-04-15 01:53:36.076256]
subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.655 [2024-04-15 01:53:36.076301] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.655 [2024-04-15 01:53:36.084263] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.655 [2024-04-15 01:53:36.084300] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.655 [2024-04-15 01:53:36.092246] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.655 [2024-04-15 01:53:36.092268] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.655 [2024-04-15 01:53:36.100267] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.655 [2024-04-15 01:53:36.100288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.655 [2024-04-15 01:53:36.108288] subsystem.c:1753:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:50.655 [2024-04-15 01:53:36.108311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:50.655 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2167242) - No such process 00:18:50.655 01:53:36 -- target/zcopy.sh@49 -- # wait 2167242 00:18:50.655 01:53:36 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:50.655 01:53:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.655 01:53:36 -- common/autotest_common.sh@10 -- # set +x 00:18:50.655 01:53:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.655 01:53:36 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:18:50.655 01:53:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.655 01:53:36 -- common/autotest_common.sh@10 -- # set +x 00:18:50.655 delay0 00:18:50.655 01:53:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.655 01:53:36 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:18:50.655 01:53:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.655 01:53:36 -- common/autotest_common.sh@10 -- # set +x 00:18:50.655 01:53:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.655 01:53:36 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:18:50.655 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.655 [2024-04-15 01:53:36.273311] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:18:57.260 Initializing NVMe Controllers 00:18:57.260 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:57.260 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:57.260 Initialization complete. Launching workers. 
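The teardown just traced is the crux of the abort pass: NSID 1 is re-pointed at a delay bdev that adds a full second to every I/O, so the abort example can catch commands while they are still queued. A condensed sketch of the same sequence, assuming a target already serving nqn.2016-06.io.spdk:cnode1 with a malloc bdev named malloc0; the flags are copied from the rpc_cmd and abort invocations above, and the latency comment is our reading of bdev_delay_create's -r/-t/-w/-n options:

    # detach the old namespace and interpose a deliberately slow delay bdev
    scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000   # avg/p99 read, avg/p99 write latency (us)
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

    # at ~1 s per I/O and queue depth 64, most commands are still pending
    # when the example submits aborts against them
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

As we read the counters below, a 'success' abort cancelled a command in flight while an 'unsuccess' abort lost the race to a completing command, so the near-even split that follows is the intended outcome of the test rather than a failure.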
00:18:57.260 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 81 00:18:57.260 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 368, failed to submit 33 00:18:57.260 success 175, unsuccess 193, failed 0 00:18:57.260 01:53:42 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:57.260 01:53:42 -- target/zcopy.sh@60 -- # nvmftestfini 00:18:57.260 01:53:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:57.260 01:53:42 -- nvmf/common.sh@116 -- # sync 00:18:57.260 01:53:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:57.260 01:53:42 -- nvmf/common.sh@119 -- # set +e 00:18:57.260 01:53:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:57.260 01:53:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:57.260 rmmod nvme_tcp 00:18:57.260 rmmod nvme_fabrics 00:18:57.260 rmmod nvme_keyring 00:18:57.260 01:53:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:57.260 01:53:42 -- nvmf/common.sh@123 -- # set -e 00:18:57.260 01:53:42 -- nvmf/common.sh@124 -- # return 0 00:18:57.260 01:53:42 -- nvmf/common.sh@477 -- # '[' -n 2165857 ']' 00:18:57.260 01:53:42 -- nvmf/common.sh@478 -- # killprocess 2165857 00:18:57.260 01:53:42 -- common/autotest_common.sh@926 -- # '[' -z 2165857 ']' 00:18:57.260 01:53:42 -- common/autotest_common.sh@930 -- # kill -0 2165857 00:18:57.260 01:53:42 -- common/autotest_common.sh@931 -- # uname 00:18:57.260 01:53:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:57.260 01:53:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2165857 00:18:57.261 01:53:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:18:57.261 01:53:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:18:57.261 01:53:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2165857' 00:18:57.261 killing process with pid 2165857 00:18:57.261 01:53:42 -- common/autotest_common.sh@945 -- # kill 2165857 00:18:57.261 01:53:42 -- common/autotest_common.sh@950 -- # wait 2165857 00:18:57.261 01:53:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:57.261 01:53:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:57.261 01:53:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:57.261 01:53:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:57.261 01:53:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:57.261 01:53:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:57.261 01:53:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:57.261 01:53:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.805 01:53:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:18:59.805 00:18:59.805 real 0m28.567s 00:18:59.805 user 0m41.555s 00:18:59.805 sys 0m8.515s 00:18:59.805 01:53:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:59.805 01:53:44 -- common/autotest_common.sh@10 -- # set +x 00:18:59.805 ************************************ 00:18:59.805 END TEST nvmf_zcopy 00:18:59.805 ************************************ 00:18:59.805 01:53:44 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:59.805 01:53:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:59.805 01:53:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:59.805 01:53:44 -- common/autotest_common.sh@10 -- # set +x 00:18:59.805 ************************************ 
00:18:59.805 START TEST nvmf_nmic 00:18:59.805 ************************************ 00:18:59.805 01:53:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:59.805 * Looking for test storage... 00:18:59.805 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:59.805 01:53:45 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:59.805 01:53:45 -- nvmf/common.sh@7 -- # uname -s 00:18:59.805 01:53:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:59.805 01:53:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:59.805 01:53:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:59.805 01:53:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:59.805 01:53:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:59.805 01:53:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:59.805 01:53:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:59.805 01:53:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:59.805 01:53:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:59.805 01:53:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:59.805 01:53:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.805 01:53:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:59.805 01:53:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:59.805 01:53:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:59.805 01:53:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:59.805 01:53:45 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:59.805 01:53:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:59.805 01:53:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:59.805 01:53:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:59.805 01:53:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.805 01:53:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.805 01:53:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.805 01:53:45 -- paths/export.sh@5 -- # export PATH 00:18:59.805 01:53:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:59.805 01:53:45 -- nvmf/common.sh@46 -- # : 0 00:18:59.805 01:53:45 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:59.805 01:53:45 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:59.805 01:53:45 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:59.805 01:53:45 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:59.805 01:53:45 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:59.805 01:53:45 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:59.805 01:53:45 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:59.805 01:53:45 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:59.805 01:53:45 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:59.805 01:53:45 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:59.805 01:53:45 -- target/nmic.sh@14 -- # nvmftestinit 00:18:59.805 01:53:45 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:59.805 01:53:45 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:59.805 01:53:45 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:59.805 01:53:45 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:59.805 01:53:45 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:59.805 01:53:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:59.805 01:53:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:59.805 01:53:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:59.805 01:53:45 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:59.805 01:53:45 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:59.805 01:53:45 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:59.805 01:53:45 -- common/autotest_common.sh@10 -- # set +x 00:19:01.711 01:53:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:01.711 01:53:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:01.711 01:53:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:01.712 01:53:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:01.712 01:53:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:01.712 01:53:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:01.712 01:53:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:01.712 01:53:46 -- nvmf/common.sh@294 -- # net_devs=() 00:19:01.712 01:53:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:01.712 01:53:46 -- nvmf/common.sh@295 -- # 
e810=() 00:19:01.712 01:53:46 -- nvmf/common.sh@295 -- # local -ga e810 00:19:01.712 01:53:46 -- nvmf/common.sh@296 -- # x722=() 00:19:01.712 01:53:46 -- nvmf/common.sh@296 -- # local -ga x722 00:19:01.712 01:53:46 -- nvmf/common.sh@297 -- # mlx=() 00:19:01.712 01:53:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:01.712 01:53:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:01.712 01:53:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:01.712 01:53:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:01.712 01:53:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:01.712 01:53:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:01.712 01:53:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:01.712 01:53:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:01.712 01:53:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:01.712 01:53:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:01.712 01:53:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:01.712 01:53:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:01.712 01:53:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:01.712 01:53:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:01.712 01:53:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:01.712 01:53:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:01.712 01:53:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:01.712 01:53:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:01.712 01:53:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:01.712 01:53:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:01.712 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:01.712 01:53:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:01.712 01:53:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:01.712 01:53:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.712 01:53:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.712 01:53:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:01.712 01:53:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:01.712 01:53:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:01.712 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:01.712 01:53:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:01.712 01:53:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:01.712 01:53:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:01.712 01:53:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:01.712 01:53:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:01.712 01:53:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:01.712 01:53:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:01.712 01:53:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:01.712 01:53:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:01.712 01:53:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.712 01:53:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:01.712 01:53:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.712 01:53:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:01.712 Found net 
devices under 0000:0a:00.0: cvl_0_0 00:19:01.712 01:53:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.712 01:53:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:01.712 01:53:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:01.712 01:53:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:01.712 01:53:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:01.712 01:53:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:01.712 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:01.712 01:53:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:01.712 01:53:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:01.712 01:53:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:01.712 01:53:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:01.712 01:53:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:01.712 01:53:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:01.712 01:53:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:01.712 01:53:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:01.712 01:53:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:01.712 01:53:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:01.712 01:53:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:01.712 01:53:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:01.712 01:53:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:01.712 01:53:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:01.712 01:53:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:01.712 01:53:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:01.712 01:53:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:01.712 01:53:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:01.712 01:53:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:01.712 01:53:47 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:01.712 01:53:47 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:01.712 01:53:47 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:01.712 01:53:47 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:01.712 01:53:47 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:01.712 01:53:47 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:01.712 01:53:47 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:01.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:01.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:19:01.712 00:19:01.712 --- 10.0.0.2 ping statistics --- 00:19:01.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.712 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:19:01.712 01:53:47 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:01.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:01.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:19:01.712 00:19:01.712 --- 10.0.0.1 ping statistics --- 00:19:01.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:01.712 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:19:01.712 01:53:47 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:01.712 01:53:47 -- nvmf/common.sh@410 -- # return 0 00:19:01.712 01:53:47 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:01.712 01:53:47 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:01.712 01:53:47 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:01.712 01:53:47 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:01.712 01:53:47 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:01.712 01:53:47 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:01.712 01:53:47 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:01.712 01:53:47 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:01.713 01:53:47 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:01.713 01:53:47 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:01.713 01:53:47 -- common/autotest_common.sh@10 -- # set +x 00:19:01.713 01:53:47 -- nvmf/common.sh@469 -- # nvmfpid=2170682 00:19:01.713 01:53:47 -- nvmf/common.sh@470 -- # waitforlisten 2170682 00:19:01.713 01:53:47 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:01.713 01:53:47 -- common/autotest_common.sh@819 -- # '[' -z 2170682 ']' 00:19:01.713 01:53:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.713 01:53:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:01.713 01:53:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.713 01:53:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:01.713 01:53:47 -- common/autotest_common.sh@10 -- # set +x 00:19:01.713 [2024-04-15 01:53:47.199910] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:19:01.713 [2024-04-15 01:53:47.199983] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:01.713 EAL: No free 2048 kB hugepages reported on node 1 00:19:01.713 [2024-04-15 01:53:47.267568] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:01.713 [2024-04-15 01:53:47.356269] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:01.713 [2024-04-15 01:53:47.356427] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:01.713 [2024-04-15 01:53:47.356445] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:01.713 [2024-04-15 01:53:47.356458] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
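The ping exchange above is the checkpoint for the network plumbing that nvmf_tcp_init builds before every TCP target test in this log: one port of the E810 pair (cvl_0_0) is moved into a private network namespace to act as the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the ip/iptables calls traced above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root namespace -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target namespace -> initiator

Because nvmf_tgt is then launched under ip netns exec cvl_0_0_ns_spdk (the NVMF_TARGET_NS_CMD seen in the trace), host-side tools in the root namespace see a genuinely remote NVMe/TCP endpoint even though both ports sit on the same adapter.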
00:19:01.713 [2024-04-15 01:53:47.356536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:01.713 [2024-04-15 01:53:47.356571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:01.713 [2024-04-15 01:53:47.356690] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:01.713 [2024-04-15 01:53:47.356693] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.649 01:53:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:02.649 01:53:48 -- common/autotest_common.sh@852 -- # return 0 00:19:02.649 01:53:48 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:02.649 01:53:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:02.649 01:53:48 -- common/autotest_common.sh@10 -- # set +x 00:19:02.649 01:53:48 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.649 01:53:48 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:02.649 01:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.649 01:53:48 -- common/autotest_common.sh@10 -- # set +x 00:19:02.649 [2024-04-15 01:53:48.193721] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.649 01:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.649 01:53:48 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:02.649 01:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.649 01:53:48 -- common/autotest_common.sh@10 -- # set +x 00:19:02.649 Malloc0 00:19:02.649 01:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.649 01:53:48 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:02.649 01:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.649 01:53:48 -- common/autotest_common.sh@10 -- # set +x 00:19:02.649 01:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.649 01:53:48 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:02.649 01:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.649 01:53:48 -- common/autotest_common.sh@10 -- # set +x 00:19:02.649 01:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.649 01:53:48 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:02.649 01:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.649 01:53:48 -- common/autotest_common.sh@10 -- # set +x 00:19:02.649 [2024-04-15 01:53:48.247057] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.649 01:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.649 01:53:48 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:02.649 test case1: single bdev can't be used in multiple subsystems 00:19:02.649 01:53:48 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:02.649 01:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:02.649 01:53:48 -- common/autotest_common.sh@10 -- # set +x 00:19:02.649 01:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:02.649 01:53:48 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:02.649 01:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 
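Stripped of the xtrace bookkeeping, the bring-up for this test case reduces to a handful of RPCs. A sketch of the equivalent manual sequence against a running nvmf_tgt, reusing the names from the trace (here via scripts/rpc.py rather than the rpc_cmd wrapper):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0       # 64 MB bdev, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # test case1: a second subsystem must not be able to claim the same bdev
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # expected to fail

The expected failure surfaces twice in the entries that follow: first as the bdev layer's claim error (Malloc0 already claimed: type exclusive_write), then as the JSON-RPC -32602 response that the script asserts on.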
00:19:02.649 01:53:48 -- common/autotest_common.sh@10 -- # set +x
00:19:02.649 01:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:19:02.649 01:53:48 -- target/nmic.sh@28 -- # nmic_status=0
00:19:02.649 01:53:48 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:19:02.649 01:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:19:02.649 01:53:48 -- common/autotest_common.sh@10 -- # set +x
00:19:02.649 [2024-04-15 01:53:48.270905] bdev.c:7935:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:19:02.649 [2024-04-15 01:53:48.270934] subsystem.c:1779:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:19:02.649 [2024-04-15 01:53:48.270963] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:19:02.649 request:
00:19:02.649 {
00:19:02.649   "nqn": "nqn.2016-06.io.spdk:cnode2",
00:19:02.649   "namespace": {
00:19:02.649     "bdev_name": "Malloc0"
00:19:02.649   },
00:19:02.649   "method": "nvmf_subsystem_add_ns",
00:19:02.649   "req_id": 1
00:19:02.649 }
00:19:02.649 Got JSON-RPC error response
00:19:02.649 response:
00:19:02.649 {
00:19:02.649   "code": -32602,
00:19:02.649   "message": "Invalid parameters"
00:19:02.649 }
00:19:02.649 01:53:48 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]]
00:19:02.649 01:53:48 -- target/nmic.sh@29 -- # nmic_status=1
00:19:02.649 01:53:48 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:19:02.649 01:53:48 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
00:19:02.649 Adding namespace failed - expected result.
00:19:02.649 01:53:48 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:19:02.649 test case2: host connect to nvmf target in multiple paths
00:19:02.649 01:53:48 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:19:02.649 01:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:19:02.649 01:53:48 -- common/autotest_common.sh@10 -- # set +x
00:19:02.649 [2024-04-15 01:53:48.279022] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:19:02.649 01:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:19:02.649 01:53:48 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:19:03.588 01:53:48 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:19:04.156 01:53:49 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:19:04.156 01:53:49 -- common/autotest_common.sh@1177 -- # local i=0
00:19:04.156 01:53:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0
00:19:04.156 01:53:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]]
00:19:04.156 01:53:49 -- common/autotest_common.sh@1184 -- # sleep 2
00:19:06.063 01:53:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 ))
00:19:06.064 01:53:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL
00:19:06.064 01:53:51 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME
00:19:06.064 01:53:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1
00:19:06.064 01:53:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter ))
00:19:06.064 01:53:51 -- common/autotest_common.sh@1187 -- # return 0
00:19:06.064 01:53:51 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:19:06.064 [global]
00:19:06.064 thread=1
00:19:06.064 invalidate=1
00:19:06.064 rw=write
00:19:06.064 time_based=1
00:19:06.064 runtime=1
00:19:06.064 ioengine=libaio
00:19:06.064 direct=1
00:19:06.064 bs=4096
00:19:06.064 iodepth=1
00:19:06.064 norandommap=0
00:19:06.064 numjobs=1
00:19:06.064
00:19:06.064 verify_dump=1
00:19:06.064 verify_backlog=512
00:19:06.064 verify_state_save=0
00:19:06.064 do_verify=1
00:19:06.064 verify=crc32c-intel
00:19:06.064 [job0]
00:19:06.064 filename=/dev/nvme0n1
00:19:06.064 Could not set queue depth (nvme0n1)
00:19:06.321 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:19:06.321 fio-3.35
00:19:06.321 Starting 1 thread
00:19:07.698
00:19:07.698 job0: (groupid=0, jobs=1): err= 0: pid=2171346: Mon Apr 15 01:53:52 2024
00:19:07.698 read: IOPS=869, BW=3477KiB/s (3560kB/s)(3480KiB/1001msec)
00:19:07.698 slat (nsec): min=5777, max=57538, avg=12957.60, stdev=8388.25
00:19:07.698 clat (usec): min=514, max=1085, avg=607.84, stdev=58.94
00:19:07.698 lat (usec): min=520, max=1091, avg=620.79, stdev=65.38
00:19:07.698 clat percentiles (usec):
00:19:07.698 | 1.00th=[ 537], 5.00th=[ 553], 10.00th=[ 562], 20.00th=[ 570],
00:19:07.698 | 30.00th=[ 578], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 611],
00:19:07.698 | 70.00th=[ 619], 80.00th=[ 635], 90.00th=[ 668], 95.00th=[ 742],
00:19:07.698 | 99.00th=[ 807], 99.50th=[ 824], 99.90th=[ 1090], 99.95th=[ 1090],
00:19:07.698 | 99.99th=[ 1090]
00:19:07.698 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets
00:19:07.698 slat (usec): min=7, max=29844, avg=51.46, stdev=932.04
00:19:07.698 clat (usec): min=269, max=1393, avg=389.03, stdev=76.84
00:19:07.698 lat (usec): min=279, max=30312, avg=440.49, stdev=938.17
00:19:07.698 clat percentiles (usec):
00:19:07.698 | 1.00th=[ 281], 5.00th=[ 302], 10.00th=[ 314], 20.00th=[ 322],
00:19:07.698 | 30.00th=[ 343], 40.00th=[ 367], 50.00th=[ 396], 60.00th=[ 404],
00:19:07.698 | 70.00th=[ 416], 80.00th=[ 437], 90.00th=[ 478], 95.00th=[ 498],
00:19:07.698 | 99.00th=[ 519], 99.50th=[ 553], 99.90th=[ 1385], 99.95th=[ 1401],
00:19:07.698 | 99.99th=[ 1401]
00:19:07.698 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:19:07.698 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:19:07.698 lat (usec) : 500=51.48%, 750=46.36%, 1000=1.95%
00:19:07.698 lat (msec) : 2=0.21%
00:19:07.698 cpu : usr=2.50%, sys=4.40%, ctx=1896, majf=0, minf=2
00:19:07.698 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:07.698 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:07.698 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:07.698 issued rwts: total=870,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:07.698 latency : target=0, window=0, percentile=100.00%, depth=1
00:19:07.698
00:19:07.698 Run status group 0 (all jobs):
00:19:07.698 READ: bw=3477KiB/s (3560kB/s), 3477KiB/s-3477KiB/s (3560kB/s-3560kB/s), io=3480KiB (3564kB), run=1001-1001msec
00:19:07.698 WRITE: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB),
run=1001-1001msec 00:19:07.698 00:19:07.698 Disk stats (read/write): 00:19:07.698 nvme0n1: ios=752/1024, merge=0/0, ticks=1410/327, in_queue=1737, util=98.70% 00:19:07.698 01:53:52 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:07.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:19:07.698 01:53:53 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:07.698 01:53:53 -- common/autotest_common.sh@1198 -- # local i=0 00:19:07.698 01:53:53 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:07.698 01:53:53 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:07.698 01:53:53 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:07.698 01:53:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:07.698 01:53:53 -- common/autotest_common.sh@1210 -- # return 0 00:19:07.698 01:53:53 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:19:07.698 01:53:53 -- target/nmic.sh@53 -- # nvmftestfini 00:19:07.698 01:53:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:07.698 01:53:53 -- nvmf/common.sh@116 -- # sync 00:19:07.698 01:53:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:07.698 01:53:53 -- nvmf/common.sh@119 -- # set +e 00:19:07.698 01:53:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:07.698 01:53:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:07.698 rmmod nvme_tcp 00:19:07.698 rmmod nvme_fabrics 00:19:07.698 rmmod nvme_keyring 00:19:07.698 01:53:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:07.698 01:53:53 -- nvmf/common.sh@123 -- # set -e 00:19:07.698 01:53:53 -- nvmf/common.sh@124 -- # return 0 00:19:07.698 01:53:53 -- nvmf/common.sh@477 -- # '[' -n 2170682 ']' 00:19:07.698 01:53:53 -- nvmf/common.sh@478 -- # killprocess 2170682 00:19:07.698 01:53:53 -- common/autotest_common.sh@926 -- # '[' -z 2170682 ']' 00:19:07.698 01:53:53 -- common/autotest_common.sh@930 -- # kill -0 2170682 00:19:07.698 01:53:53 -- common/autotest_common.sh@931 -- # uname 00:19:07.698 01:53:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:07.698 01:53:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2170682 00:19:07.698 01:53:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:07.698 01:53:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:07.698 01:53:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2170682' 00:19:07.698 killing process with pid 2170682 00:19:07.698 01:53:53 -- common/autotest_common.sh@945 -- # kill 2170682 00:19:07.698 01:53:53 -- common/autotest_common.sh@950 -- # wait 2170682 00:19:07.957 01:53:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:07.957 01:53:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:07.957 01:53:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:07.957 01:53:53 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:07.957 01:53:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:07.957 01:53:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.957 01:53:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.957 01:53:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:09.867 01:53:55 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:09.867 00:19:09.867 real 0m10.536s 00:19:09.867 user 0m25.246s 00:19:09.867 sys 0m2.425s 00:19:09.867 01:53:55 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:19:09.867 01:53:55 -- common/autotest_common.sh@10 -- # set +x 00:19:09.867 ************************************ 00:19:09.867 END TEST nvmf_nmic 00:19:09.867 ************************************ 00:19:09.867 01:53:55 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:09.867 01:53:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:09.867 01:53:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:09.867 01:53:55 -- common/autotest_common.sh@10 -- # set +x 00:19:10.125 ************************************ 00:19:10.125 START TEST nvmf_fio_target 00:19:10.125 ************************************ 00:19:10.125 01:53:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:19:10.125 * Looking for test storage... 00:19:10.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:10.125 01:53:55 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:10.125 01:53:55 -- nvmf/common.sh@7 -- # uname -s 00:19:10.125 01:53:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:10.125 01:53:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:10.125 01:53:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:10.125 01:53:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:10.125 01:53:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:10.125 01:53:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:10.125 01:53:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:10.125 01:53:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:10.126 01:53:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:10.126 01:53:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:10.126 01:53:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.126 01:53:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:10.126 01:53:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:10.126 01:53:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:10.126 01:53:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:10.126 01:53:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:10.126 01:53:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:10.126 01:53:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:10.126 01:53:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:10.126 01:53:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.126 01:53:55 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.126 01:53:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.126 01:53:55 -- paths/export.sh@5 -- # export PATH 00:19:10.126 01:53:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:10.126 01:53:55 -- nvmf/common.sh@46 -- # : 0 00:19:10.126 01:53:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:10.126 01:53:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:10.126 01:53:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:10.126 01:53:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:10.126 01:53:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:10.126 01:53:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:10.126 01:53:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:10.126 01:53:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:10.126 01:53:55 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:10.126 01:53:55 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:10.126 01:53:55 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:10.126 01:53:55 -- target/fio.sh@16 -- # nvmftestinit 00:19:10.126 01:53:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:10.126 01:53:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:10.126 01:53:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:10.126 01:53:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:10.126 01:53:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:10.126 01:53:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:10.126 01:53:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:10.126 01:53:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:10.126 01:53:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:10.126 01:53:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:10.126 01:53:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:10.126 01:53:55 -- 
common/autotest_common.sh@10 -- # set +x 00:19:12.035 01:53:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:12.035 01:53:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:12.035 01:53:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:12.035 01:53:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:12.035 01:53:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:12.035 01:53:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:12.035 01:53:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:12.035 01:53:57 -- nvmf/common.sh@294 -- # net_devs=() 00:19:12.035 01:53:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:12.035 01:53:57 -- nvmf/common.sh@295 -- # e810=() 00:19:12.035 01:53:57 -- nvmf/common.sh@295 -- # local -ga e810 00:19:12.035 01:53:57 -- nvmf/common.sh@296 -- # x722=() 00:19:12.035 01:53:57 -- nvmf/common.sh@296 -- # local -ga x722 00:19:12.035 01:53:57 -- nvmf/common.sh@297 -- # mlx=() 00:19:12.035 01:53:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:12.035 01:53:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:12.035 01:53:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:12.035 01:53:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:12.035 01:53:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:12.035 01:53:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:12.035 01:53:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:12.035 01:53:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:12.035 01:53:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:12.035 01:53:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:12.035 01:53:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:12.035 01:53:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:12.035 01:53:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:12.035 01:53:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:12.035 01:53:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:12.035 01:53:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:12.035 01:53:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:12.035 01:53:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:12.035 01:53:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:12.035 01:53:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:12.035 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:12.035 01:53:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:12.035 01:53:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:12.035 01:53:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.035 01:53:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.035 01:53:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:12.035 01:53:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:12.035 01:53:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:12.035 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:12.035 01:53:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:12.035 01:53:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:12.035 01:53:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.035 01:53:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 
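The discovery pass above matches NICs against the harness's PCI allow-lists (the e810/x722/mlx arrays keyed by vendor:device id) and then resolves each matching function to its kernel netdev through sysfs. A minimal standalone sketch of that lookup, assuming only the Intel E810 id 0x8086:0x159b seen on this rig; the helper name find_nvmf_nics is hypothetical:

#!/usr/bin/env bash
# Map NVMe-oF-capable NICs to net devices via /sys/bus/pci/devices/<bdf>/net/,
# mirroring the pci_net_devs glob in the trace above. Only the E810 id from
# this log is matched; extend the test for other supported parts.
find_nvmf_nics() {
    local pci vendor device net
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor"); device=$(<"$pci/device")
        [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
        net=("$pci"/net/*)                      # bound netdev, if any
        [[ -e ${net[0]} ]] && echo "${pci##*/}: ${net[0]##*/}"
    done
}
find_nvmf_nics   # on this host: 0000:0a:00.0: cvl_0_0 and 0000:0a:00.1: cvl_0_1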
00:19:12.035 01:53:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:12.035 01:53:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:12.035 01:53:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:12.035 01:53:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:12.035 01:53:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:12.035 01:53:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.035 01:53:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:12.035 01:53:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.035 01:53:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:12.035 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:12.035 01:53:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.035 01:53:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:12.035 01:53:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.035 01:53:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:12.035 01:53:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.035 01:53:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:12.035 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:12.035 01:53:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.035 01:53:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:12.035 01:53:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:12.035 01:53:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:12.035 01:53:57 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:12.035 01:53:57 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:12.035 01:53:57 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:12.035 01:53:57 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:12.035 01:53:57 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:12.035 01:53:57 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:12.035 01:53:57 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:12.035 01:53:57 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:12.035 01:53:57 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:12.035 01:53:57 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:12.035 01:53:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:12.035 01:53:57 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:12.035 01:53:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:12.035 01:53:57 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:12.035 01:53:57 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:12.035 01:53:57 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:12.035 01:53:57 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:12.035 01:53:57 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:12.035 01:53:57 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:12.295 01:53:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:12.295 01:53:57 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:12.295 01:53:57 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:12.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:12.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:19:12.295 00:19:12.295 --- 10.0.0.2 ping statistics --- 00:19:12.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.295 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:19:12.295 01:53:57 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:12.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:12.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:19:12.295 00:19:12.295 --- 10.0.0.1 ping statistics --- 00:19:12.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.295 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:19:12.295 01:53:57 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:12.295 01:53:57 -- nvmf/common.sh@410 -- # return 0 00:19:12.295 01:53:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:12.295 01:53:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:12.295 01:53:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:12.295 01:53:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:12.295 01:53:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:12.295 01:53:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:12.295 01:53:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:12.295 01:53:57 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:19:12.295 01:53:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:12.295 01:53:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:12.295 01:53:57 -- common/autotest_common.sh@10 -- # set +x 00:19:12.295 01:53:57 -- nvmf/common.sh@469 -- # nvmfpid=2173437 00:19:12.295 01:53:57 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:12.295 01:53:57 -- nvmf/common.sh@470 -- # waitforlisten 2173437 00:19:12.295 01:53:57 -- common/autotest_common.sh@819 -- # '[' -z 2173437 ']' 00:19:12.295 01:53:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.295 01:53:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:12.295 01:53:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.295 01:53:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:12.295 01:53:57 -- common/autotest_common.sh@10 -- # set +x 00:19:12.295 [2024-04-15 01:53:57.808442] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:19:12.295 [2024-04-15 01:53:57.808527] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.295 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.295 [2024-04-15 01:53:57.879246] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:12.554 [2024-04-15 01:53:57.969351] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:12.554 [2024-04-15 01:53:57.969533] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.554 [2024-04-15 01:53:57.969554] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
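Underneath the startup notices, the namespace plumbing traced at nvmf/common.sh@241-267 is what lets a single dual-port NIC act as both target and initiator. A condensed, hand-replayable version of that sequence, using the interface names and addresses from this log:

#!/usr/bin/env bash
# Split one dual-port NIC into target and initiator roles: cvl_0_0 moves
# into its own netns with the target IP, cvl_0_1 stays in the root
# namespace as the initiator, and TCP/4420 is opened for NVMe/TCP.
set -e
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator IP
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                       # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1   # target -> initiator

The target app is then launched inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt), which is why the connectivity pings above run in both directions before any fio job starts.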
00:19:12.554 [2024-04-15 01:53:57.969569] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:12.554 [2024-04-15 01:53:57.969656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:12.554 [2024-04-15 01:53:57.969726] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.554 [2024-04-15 01:53:57.969816] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:12.554 [2024-04-15 01:53:57.969819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.154 01:53:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:13.154 01:53:58 -- common/autotest_common.sh@852 -- # return 0 00:19:13.154 01:53:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:13.154 01:53:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:13.154 01:53:58 -- common/autotest_common.sh@10 -- # set +x 00:19:13.154 01:53:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.154 01:53:58 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:19:13.411 [2024-04-15 01:53:58.982388] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:13.411 01:53:59 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:13.670 01:53:59 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:19:13.670 01:53:59 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:13.928 01:53:59 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:19:13.928 01:53:59 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:14.187 01:53:59 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:19:14.187 01:53:59 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:14.447 01:54:00 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:19:14.447 01:54:00 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:19:14.706 01:54:00 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:14.964 01:54:00 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:19:14.964 01:54:00 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:15.223 01:54:00 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:19:15.223 01:54:00 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:19:15.481 01:54:01 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:19:15.481 01:54:01 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:19:15.739 01:54:01 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:15.997 01:54:01 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:15.997 01:54:01 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:16.255 01:54:01 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:19:16.255 01:54:01 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:16.513 01:54:02 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:16.778 [2024-04-15 01:54:02.330626] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.778 01:54:02 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:19:17.037 01:54:02 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:19:17.295 01:54:02 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:18.233 01:54:03 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:19:18.233 01:54:03 -- common/autotest_common.sh@1177 -- # local i=0 00:19:18.233 01:54:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:19:18.233 01:54:03 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:19:18.233 01:54:03 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:19:18.233 01:54:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:19:20.135 01:54:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:19:20.135 01:54:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:19:20.135 01:54:05 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:19:20.135 01:54:05 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:19:20.135 01:54:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:19:20.135 01:54:05 -- common/autotest_common.sh@1187 -- # return 0 00:19:20.135 01:54:05 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:19:20.135 [global] 00:19:20.135 thread=1 00:19:20.135 invalidate=1 00:19:20.135 rw=write 00:19:20.135 time_based=1 00:19:20.135 runtime=1 00:19:20.135 ioengine=libaio 00:19:20.135 direct=1 00:19:20.135 bs=4096 00:19:20.135 iodepth=1 00:19:20.135 norandommap=0 00:19:20.135 numjobs=1 00:19:20.135 00:19:20.135 verify_dump=1 00:19:20.135 verify_backlog=512 00:19:20.135 verify_state_save=0 00:19:20.135 do_verify=1 00:19:20.135 verify=crc32c-intel 00:19:20.135 [job0] 00:19:20.135 filename=/dev/nvme0n1 00:19:20.135 [job1] 00:19:20.135 filename=/dev/nvme0n2 00:19:20.135 [job2] 00:19:20.135 filename=/dev/nvme0n3 00:19:20.135 [job3] 00:19:20.135 filename=/dev/nvme0n4 00:19:20.135 Could not set queue depth (nvme0n1) 00:19:20.135 Could not set queue depth (nvme0n2) 00:19:20.135 Could not set queue depth (nvme0n3) 00:19:20.135 Could not set queue depth (nvme0n4) 00:19:20.394 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:20.394 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:20.394 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 
00:19:20.394 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:20.394 fio-3.35 00:19:20.394 Starting 4 threads 00:19:21.770 00:19:21.770 job0: (groupid=0, jobs=1): err= 0: pid=2174660: Mon Apr 15 01:54:07 2024 00:19:21.770 read: IOPS=18, BW=75.2KiB/s (77.1kB/s)(76.0KiB/1010msec) 00:19:21.770 slat (nsec): min=12387, max=39625, avg=26515.58, stdev=9978.96 00:19:21.770 clat (usec): min=951, max=42154, avg=39242.50, stdev=9284.27 00:19:21.770 lat (usec): min=963, max=42167, avg=39269.02, stdev=9287.40 00:19:21.770 clat percentiles (usec): 00:19:21.770 | 1.00th=[ 955], 5.00th=[ 955], 10.00th=[40633], 20.00th=[41157], 00:19:21.770 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:21.770 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:19:21.770 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:19:21.770 | 99.99th=[42206] 00:19:21.770 write: IOPS=506, BW=2028KiB/s (2076kB/s)(2048KiB/1010msec); 0 zone resets 00:19:21.770 slat (nsec): min=7774, max=39952, avg=15795.03, stdev=6258.33 00:19:21.770 clat (usec): min=279, max=1691, avg=494.29, stdev=165.61 00:19:21.770 lat (usec): min=288, max=1707, avg=510.09, stdev=166.32 00:19:21.770 clat percentiles (usec): 00:19:21.770 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 302], 20.00th=[ 330], 00:19:21.770 | 30.00th=[ 396], 40.00th=[ 437], 50.00th=[ 469], 60.00th=[ 502], 00:19:21.770 | 70.00th=[ 578], 80.00th=[ 635], 90.00th=[ 693], 95.00th=[ 758], 00:19:21.770 | 99.00th=[ 947], 99.50th=[ 971], 99.90th=[ 1696], 99.95th=[ 1696], 00:19:21.770 | 99.99th=[ 1696] 00:19:21.770 bw ( KiB/s): min= 4096, max= 4096, per=24.89%, avg=4096.00, stdev= 0.00, samples=1 00:19:21.770 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:21.770 lat (usec) : 500=57.44%, 750=33.33%, 1000=5.46% 00:19:21.770 lat (msec) : 2=0.38%, 50=3.39% 00:19:21.770 cpu : usr=0.59%, sys=0.89%, ctx=532, majf=0, minf=2 00:19:21.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.770 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.770 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.770 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.770 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.770 job1: (groupid=0, jobs=1): err= 0: pid=2174661: Mon Apr 15 01:54:07 2024 00:19:21.770 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:21.770 slat (nsec): min=5744, max=73515, avg=25160.24, stdev=11375.14 00:19:21.770 clat (usec): min=398, max=703, avg=516.40, stdev=54.94 00:19:21.770 lat (usec): min=411, max=765, avg=541.56, stdev=57.77 00:19:21.770 clat percentiles (usec): 00:19:21.770 | 1.00th=[ 424], 5.00th=[ 441], 10.00th=[ 457], 20.00th=[ 465], 00:19:21.770 | 30.00th=[ 478], 40.00th=[ 490], 50.00th=[ 506], 60.00th=[ 529], 00:19:21.770 | 70.00th=[ 553], 80.00th=[ 562], 90.00th=[ 586], 95.00th=[ 619], 00:19:21.770 | 99.00th=[ 660], 99.50th=[ 660], 99.90th=[ 693], 99.95th=[ 701], 00:19:21.770 | 99.99th=[ 701] 00:19:21.770 write: IOPS=1277, BW=5111KiB/s (5234kB/s)(5116KiB/1001msec); 0 zone resets 00:19:21.770 slat (nsec): min=6408, max=79529, avg=16899.36, stdev=7964.35 00:19:21.770 clat (usec): min=265, max=552, avg=318.20, stdev=43.85 00:19:21.770 lat (usec): min=272, max=579, avg=335.10, stdev=44.17 00:19:21.770 clat percentiles (usec): 00:19:21.770 | 1.00th=[ 269], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 
281], 00:19:21.770 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 314], 00:19:21.770 | 70.00th=[ 322], 80.00th=[ 351], 90.00th=[ 383], 95.00th=[ 396], 00:19:21.770 | 99.00th=[ 461], 99.50th=[ 486], 99.90th=[ 545], 99.95th=[ 553], 00:19:21.770 | 99.99th=[ 553] 00:19:21.770 bw ( KiB/s): min= 5288, max= 5288, per=32.13%, avg=5288.00, stdev= 0.00, samples=1 00:19:21.770 iops : min= 1322, max= 1322, avg=1322.00, stdev= 0.00, samples=1 00:19:21.770 lat (usec) : 500=75.81%, 750=24.19% 00:19:21.770 cpu : usr=2.10%, sys=5.30%, ctx=2305, majf=0, minf=1 00:19:21.770 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.771 issued rwts: total=1024,1279,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.771 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.771 job2: (groupid=0, jobs=1): err= 0: pid=2174662: Mon Apr 15 01:54:07 2024 00:19:21.771 read: IOPS=839, BW=3357KiB/s (3437kB/s)(3360KiB/1001msec) 00:19:21.771 slat (nsec): min=6443, max=70248, avg=19587.36, stdev=9757.83 00:19:21.771 clat (usec): min=538, max=1025, avg=699.20, stdev=89.34 00:19:21.771 lat (usec): min=546, max=1059, avg=718.78, stdev=90.53 00:19:21.771 clat percentiles (usec): 00:19:21.771 | 1.00th=[ 562], 5.00th=[ 578], 10.00th=[ 594], 20.00th=[ 619], 00:19:21.771 | 30.00th=[ 644], 40.00th=[ 660], 50.00th=[ 685], 60.00th=[ 709], 00:19:21.771 | 70.00th=[ 742], 80.00th=[ 775], 90.00th=[ 832], 95.00th=[ 865], 00:19:21.771 | 99.00th=[ 947], 99.50th=[ 979], 99.90th=[ 1029], 99.95th=[ 1029], 00:19:21.771 | 99.99th=[ 1029] 00:19:21.771 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:19:21.771 slat (nsec): min=7395, max=76080, avg=17639.19, stdev=9667.96 00:19:21.771 clat (usec): min=274, max=963, avg=360.60, stdev=68.47 00:19:21.771 lat (usec): min=282, max=973, avg=378.24, stdev=71.26 00:19:21.771 clat percentiles (usec): 00:19:21.771 | 1.00th=[ 281], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 306], 00:19:21.771 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 355], 00:19:21.771 | 70.00th=[ 383], 80.00th=[ 412], 90.00th=[ 453], 95.00th=[ 482], 00:19:21.771 | 99.00th=[ 570], 99.50th=[ 619], 99.90th=[ 824], 99.95th=[ 963], 00:19:21.771 | 99.99th=[ 963] 00:19:21.771 bw ( KiB/s): min= 4096, max= 4096, per=24.89%, avg=4096.00, stdev= 0.00, samples=1 00:19:21.771 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:21.771 lat (usec) : 500=53.06%, 750=34.66%, 1000=12.18% 00:19:21.771 lat (msec) : 2=0.11% 00:19:21.771 cpu : usr=2.10%, sys=5.00%, ctx=1865, majf=0, minf=1 00:19:21.771 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.771 issued rwts: total=840,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.771 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.771 job3: (groupid=0, jobs=1): err= 0: pid=2174663: Mon Apr 15 01:54:07 2024 00:19:21.771 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:21.771 slat (nsec): min=6230, max=78563, avg=23871.46, stdev=10554.82 00:19:21.771 clat (usec): min=406, max=699, avg=486.17, stdev=33.63 00:19:21.771 lat (usec): min=430, max=725, avg=510.04, stdev=32.88 00:19:21.771 clat percentiles (usec): 
00:19:21.771 | 1.00th=[ 429], 5.00th=[ 445], 10.00th=[ 453], 20.00th=[ 465], 00:19:21.771 | 30.00th=[ 474], 40.00th=[ 478], 50.00th=[ 482], 60.00th=[ 486], 00:19:21.771 | 70.00th=[ 494], 80.00th=[ 502], 90.00th=[ 523], 95.00th=[ 553], 00:19:21.771 | 99.00th=[ 611], 99.50th=[ 635], 99.90th=[ 676], 99.95th=[ 701], 00:19:21.771 | 99.99th=[ 701] 00:19:21.771 write: IOPS=1339, BW=5359KiB/s (5487kB/s)(5364KiB/1001msec); 0 zone resets 00:19:21.771 slat (nsec): min=6178, max=49563, avg=16091.70, stdev=6999.34 00:19:21.771 clat (usec): min=266, max=1087, avg=330.52, stdev=66.01 00:19:21.771 lat (usec): min=272, max=1103, avg=346.61, stdev=65.90 00:19:21.771 clat percentiles (usec): 00:19:21.771 | 1.00th=[ 273], 5.00th=[ 277], 10.00th=[ 281], 20.00th=[ 285], 00:19:21.771 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 314], 60.00th=[ 318], 00:19:21.771 | 70.00th=[ 326], 80.00th=[ 371], 90.00th=[ 420], 95.00th=[ 478], 00:19:21.771 | 99.00th=[ 553], 99.50th=[ 586], 99.90th=[ 611], 99.95th=[ 1090], 00:19:21.771 | 99.99th=[ 1090] 00:19:21.771 bw ( KiB/s): min= 5120, max= 5120, per=31.11%, avg=5120.00, stdev= 0.00, samples=1 00:19:21.771 iops : min= 1280, max= 1280, avg=1280.00, stdev= 0.00, samples=1 00:19:21.771 lat (usec) : 500=88.50%, 750=11.46% 00:19:21.771 lat (msec) : 2=0.04% 00:19:21.771 cpu : usr=2.20%, sys=5.20%, ctx=2365, majf=0, minf=1 00:19:21.771 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:21.771 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.771 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:21.771 issued rwts: total=1024,1341,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:21.771 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:21.771 00:19:21.771 Run status group 0 (all jobs): 00:19:21.771 READ: bw=11.2MiB/s (11.8MB/s), 75.2KiB/s-4092KiB/s (77.1kB/s-4190kB/s), io=11.4MiB (11.9MB), run=1001-1010msec 00:19:21.771 WRITE: bw=16.1MiB/s (16.9MB/s), 2028KiB/s-5359KiB/s (2076kB/s-5487kB/s), io=16.2MiB (17.0MB), run=1001-1010msec 00:19:21.771 00:19:21.771 Disk stats (read/write): 00:19:21.771 nvme0n1: ios=65/512, merge=0/0, ticks=638/249, in_queue=887, util=87.78% 00:19:21.771 nvme0n2: ios=936/1024, merge=0/0, ticks=759/332, in_queue=1091, util=97.45% 00:19:21.771 nvme0n3: ios=626/1024, merge=0/0, ticks=433/353, in_queue=786, util=88.76% 00:19:21.771 nvme0n4: ios=932/1024, merge=0/0, ticks=449/347, in_queue=796, util=89.51% 00:19:21.771 01:54:07 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:19:21.771 [global] 00:19:21.771 thread=1 00:19:21.771 invalidate=1 00:19:21.771 rw=randwrite 00:19:21.771 time_based=1 00:19:21.771 runtime=1 00:19:21.771 ioengine=libaio 00:19:21.771 direct=1 00:19:21.771 bs=4096 00:19:21.771 iodepth=1 00:19:21.771 norandommap=0 00:19:21.771 numjobs=1 00:19:21.771 00:19:21.771 verify_dump=1 00:19:21.771 verify_backlog=512 00:19:21.771 verify_state_save=0 00:19:21.771 do_verify=1 00:19:21.771 verify=crc32c-intel 00:19:21.771 [job0] 00:19:21.771 filename=/dev/nvme0n1 00:19:21.771 [job1] 00:19:21.771 filename=/dev/nvme0n2 00:19:21.771 [job2] 00:19:21.771 filename=/dev/nvme0n3 00:19:21.771 [job3] 00:19:21.771 filename=/dev/nvme0n4 00:19:21.771 Could not set queue depth (nvme0n1) 00:19:21.771 Could not set queue depth (nvme0n2) 00:19:21.771 Could not set queue depth (nvme0n3) 00:19:21.771 Could not set queue depth (nvme0n4) 00:19:21.771 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, 
(W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.771 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.771 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.771 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:21.771 fio-3.35 00:19:21.771 Starting 4 threads 00:19:23.146 00:19:23.146 job0: (groupid=0, jobs=1): err= 0: pid=2175085: Mon Apr 15 01:54:08 2024 00:19:23.146 read: IOPS=22, BW=89.5KiB/s (91.6kB/s)(92.0KiB/1028msec) 00:19:23.146 slat (nsec): min=7479, max=37097, avg=18159.61, stdev=7335.18 00:19:23.146 clat (usec): min=590, max=41351, avg=34012.06, stdev=15568.41 00:19:23.146 lat (usec): min=598, max=41369, avg=34030.22, stdev=15569.80 00:19:23.146 clat percentiles (usec): 00:19:23.146 | 1.00th=[ 594], 5.00th=[ 603], 10.00th=[ 848], 20.00th=[40633], 00:19:23.146 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:23.146 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:23.146 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:23.146 | 99.99th=[41157] 00:19:23.146 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:19:23.146 slat (nsec): min=6935, max=38465, avg=14147.08, stdev=5685.57 00:19:23.146 clat (usec): min=274, max=1348, avg=461.22, stdev=117.72 00:19:23.146 lat (usec): min=283, max=1358, avg=475.37, stdev=116.84 00:19:23.146 clat percentiles (usec): 00:19:23.146 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 314], 20.00th=[ 367], 00:19:23.146 | 30.00th=[ 408], 40.00th=[ 433], 50.00th=[ 449], 60.00th=[ 469], 00:19:23.146 | 70.00th=[ 494], 80.00th=[ 529], 90.00th=[ 660], 95.00th=[ 685], 00:19:23.146 | 99.00th=[ 725], 99.50th=[ 791], 99.90th=[ 1352], 99.95th=[ 1352], 00:19:23.146 | 99.99th=[ 1352] 00:19:23.146 bw ( KiB/s): min= 4096, max= 4096, per=37.45%, avg=4096.00, stdev= 0.00, samples=1 00:19:23.146 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:23.146 lat (usec) : 500=69.72%, 750=25.79%, 1000=0.37% 00:19:23.146 lat (msec) : 2=0.56%, 50=3.55% 00:19:23.146 cpu : usr=0.10%, sys=1.27%, ctx=537, majf=0, minf=2 00:19:23.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.146 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.146 job1: (groupid=0, jobs=1): err= 0: pid=2175101: Mon Apr 15 01:54:08 2024 00:19:23.146 read: IOPS=96, BW=388KiB/s (397kB/s)(400KiB/1032msec) 00:19:23.146 slat (nsec): min=8086, max=36145, avg=12113.28, stdev=6535.83 00:19:23.146 clat (usec): min=521, max=41737, avg=7479.47, stdev=15267.18 00:19:23.146 lat (usec): min=530, max=41769, avg=7491.58, stdev=15271.38 00:19:23.146 clat percentiles (usec): 00:19:23.146 | 1.00th=[ 523], 5.00th=[ 537], 10.00th=[ 537], 20.00th=[ 545], 00:19:23.146 | 30.00th=[ 545], 40.00th=[ 562], 50.00th=[ 611], 60.00th=[ 676], 00:19:23.146 | 70.00th=[ 685], 80.00th=[ 701], 90.00th=[41157], 95.00th=[41157], 00:19:23.146 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:19:23.146 | 99.99th=[41681] 00:19:23.146 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 
00:19:23.146 slat (nsec): min=7890, max=41486, avg=15029.91, stdev=5270.44 00:19:23.146 clat (usec): min=279, max=1268, avg=533.05, stdev=200.29 00:19:23.146 lat (usec): min=288, max=1283, avg=548.08, stdev=201.38 00:19:23.146 clat percentiles (usec): 00:19:23.146 | 1.00th=[ 289], 5.00th=[ 297], 10.00th=[ 326], 20.00th=[ 404], 00:19:23.146 | 30.00th=[ 424], 40.00th=[ 453], 50.00th=[ 482], 60.00th=[ 510], 00:19:23.146 | 70.00th=[ 562], 80.00th=[ 611], 90.00th=[ 898], 95.00th=[ 1029], 00:19:23.146 | 99.00th=[ 1188], 99.50th=[ 1221], 99.90th=[ 1270], 99.95th=[ 1270], 00:19:23.146 | 99.99th=[ 1270] 00:19:23.146 bw ( KiB/s): min= 4096, max= 4096, per=37.45%, avg=4096.00, stdev= 0.00, samples=1 00:19:23.146 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:23.146 lat (usec) : 500=47.06%, 750=40.52%, 1000=4.74% 00:19:23.146 lat (msec) : 2=4.90%, 50=2.78% 00:19:23.146 cpu : usr=0.58%, sys=1.16%, ctx=612, majf=0, minf=1 00:19:23.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.146 issued rwts: total=100,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.146 job2: (groupid=0, jobs=1): err= 0: pid=2175133: Mon Apr 15 01:54:08 2024 00:19:23.146 read: IOPS=20, BW=81.2KiB/s (83.1kB/s)(84.0KiB/1035msec) 00:19:23.146 slat (nsec): min=8437, max=35981, avg=19331.43, stdev=7232.65 00:19:23.146 clat (usec): min=40874, max=41295, avg=40993.56, stdev=85.43 00:19:23.146 lat (usec): min=40910, max=41303, avg=41012.89, stdev=81.32 00:19:23.146 clat percentiles (usec): 00:19:23.146 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:19:23.146 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:23.146 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:23.146 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:23.146 | 99.99th=[41157] 00:19:23.146 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:19:23.146 slat (nsec): min=7681, max=35861, avg=12751.43, stdev=5332.20 00:19:23.146 clat (usec): min=272, max=1723, avg=322.98, stdev=75.28 00:19:23.146 lat (usec): min=282, max=1735, avg=335.73, stdev=75.82 00:19:23.146 clat percentiles (usec): 00:19:23.146 | 1.00th=[ 277], 5.00th=[ 281], 10.00th=[ 281], 20.00th=[ 285], 00:19:23.146 | 30.00th=[ 293], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 318], 00:19:23.146 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 392], 95.00th=[ 408], 00:19:23.146 | 99.00th=[ 465], 99.50th=[ 478], 99.90th=[ 1729], 99.95th=[ 1729], 00:19:23.146 | 99.99th=[ 1729] 00:19:23.146 bw ( KiB/s): min= 4096, max= 4096, per=37.45%, avg=4096.00, stdev= 0.00, samples=1 00:19:23.146 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:19:23.146 lat (usec) : 500=95.87% 00:19:23.146 lat (msec) : 2=0.19%, 50=3.94% 00:19:23.146 cpu : usr=0.39%, sys=0.87%, ctx=534, majf=0, minf=1 00:19:23.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.146 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.146 job3: 
(groupid=0, jobs=1): err= 0: pid=2175143: Mon Apr 15 01:54:08 2024 00:19:23.146 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:19:23.146 slat (nsec): min=5726, max=41579, avg=10442.29, stdev=4843.74 00:19:23.146 clat (usec): min=408, max=981, avg=495.56, stdev=43.48 00:19:23.146 lat (usec): min=417, max=998, avg=506.00, stdev=42.90 00:19:23.146 clat percentiles (usec): 00:19:23.146 | 1.00th=[ 416], 5.00th=[ 424], 10.00th=[ 433], 20.00th=[ 457], 00:19:23.146 | 30.00th=[ 482], 40.00th=[ 498], 50.00th=[ 506], 60.00th=[ 510], 00:19:23.146 | 70.00th=[ 515], 80.00th=[ 523], 90.00th=[ 529], 95.00th=[ 545], 00:19:23.146 | 99.00th=[ 603], 99.50th=[ 635], 99.90th=[ 914], 99.95th=[ 979], 00:19:23.146 | 99.99th=[ 979] 00:19:23.146 write: IOPS=1292, BW=5171KiB/s (5295kB/s)(5176KiB/1001msec); 0 zone resets 00:19:23.146 slat (nsec): min=7012, max=50940, avg=15116.98, stdev=6742.93 00:19:23.146 clat (usec): min=274, max=1927, avg=350.76, stdev=101.94 00:19:23.146 lat (usec): min=283, max=1943, avg=365.88, stdev=101.38 00:19:23.146 clat percentiles (usec): 00:19:23.146 | 1.00th=[ 281], 5.00th=[ 285], 10.00th=[ 289], 20.00th=[ 297], 00:19:23.146 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 322], 60.00th=[ 330], 00:19:23.146 | 70.00th=[ 343], 80.00th=[ 355], 90.00th=[ 461], 95.00th=[ 578], 00:19:23.146 | 99.00th=[ 693], 99.50th=[ 725], 99.90th=[ 914], 99.95th=[ 1926], 00:19:23.146 | 99.99th=[ 1926] 00:19:23.146 bw ( KiB/s): min= 4488, max= 4488, per=41.03%, avg=4488.00, stdev= 0.00, samples=1 00:19:23.146 iops : min= 1122, max= 1122, avg=1122.00, stdev= 0.00, samples=1 00:19:23.146 lat (usec) : 500=69.84%, 750=29.94%, 1000=0.17% 00:19:23.146 lat (msec) : 2=0.04% 00:19:23.147 cpu : usr=2.70%, sys=3.70%, ctx=2319, majf=0, minf=1 00:19:23.147 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:23.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:23.147 issued rwts: total=1024,1294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:23.147 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:23.147 00:19:23.147 Run status group 0 (all jobs): 00:19:23.147 READ: bw=4514KiB/s (4622kB/s), 81.2KiB/s-4092KiB/s (83.1kB/s-4190kB/s), io=4672KiB (4784kB), run=1001-1035msec 00:19:23.147 WRITE: bw=10.7MiB/s (11.2MB/s), 1979KiB/s-5171KiB/s (2026kB/s-5295kB/s), io=11.1MiB (11.6MB), run=1001-1035msec 00:19:23.147 00:19:23.147 Disk stats (read/write): 00:19:23.147 nvme0n1: ios=41/512, merge=0/0, ticks=1569/234, in_queue=1803, util=97.60% 00:19:23.147 nvme0n2: ios=53/512, merge=0/0, ticks=707/269, in_queue=976, util=97.76% 00:19:23.147 nvme0n3: ios=74/512, merge=0/0, ticks=864/162, in_queue=1026, util=97.48% 00:19:23.147 nvme0n4: ios=871/1024, merge=0/0, ticks=439/359, in_queue=798, util=89.62% 00:19:23.147 01:54:08 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:19:23.147 [global] 00:19:23.147 thread=1 00:19:23.147 invalidate=1 00:19:23.147 rw=write 00:19:23.147 time_based=1 00:19:23.147 runtime=1 00:19:23.147 ioengine=libaio 00:19:23.147 direct=1 00:19:23.147 bs=4096 00:19:23.147 iodepth=128 00:19:23.147 norandommap=0 00:19:23.147 numjobs=1 00:19:23.147 00:19:23.147 verify_dump=1 00:19:23.147 verify_backlog=512 00:19:23.147 verify_state_save=0 00:19:23.147 do_verify=1 00:19:23.147 verify=crc32c-intel 00:19:23.147 [job0] 00:19:23.147 filename=/dev/nvme0n1 00:19:23.147 [job1] 
00:19:23.147 filename=/dev/nvme0n2 00:19:23.147 [job2] 00:19:23.147 filename=/dev/nvme0n3 00:19:23.147 [job3] 00:19:23.147 filename=/dev/nvme0n4 00:19:23.147 Could not set queue depth (nvme0n1) 00:19:23.147 Could not set queue depth (nvme0n2) 00:19:23.147 Could not set queue depth (nvme0n3) 00:19:23.147 Could not set queue depth (nvme0n4) 00:19:23.147 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:23.147 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:23.147 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:23.147 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:23.147 fio-3.35 00:19:23.147 Starting 4 threads 00:19:24.534 00:19:24.534 job0: (groupid=0, jobs=1): err= 0: pid=2175710: Mon Apr 15 01:54:09 2024 00:19:24.534 read: IOPS=3372, BW=13.2MiB/s (13.8MB/s)(13.2MiB/1003msec) 00:19:24.534 slat (usec): min=3, max=23880, avg=144.29, stdev=967.86 00:19:24.534 clat (usec): min=1006, max=54049, avg=18555.59, stdev=7903.43 00:19:24.534 lat (usec): min=1565, max=54064, avg=18699.88, stdev=7959.12 00:19:24.534 clat percentiles (usec): 00:19:24.534 | 1.00th=[ 2835], 5.00th=[ 7832], 10.00th=[ 9896], 20.00th=[12125], 00:19:24.534 | 30.00th=[13566], 40.00th=[16581], 50.00th=[18220], 60.00th=[18744], 00:19:24.534 | 70.00th=[19792], 80.00th=[21890], 90.00th=[29754], 95.00th=[34866], 00:19:24.534 | 99.00th=[43254], 99.50th=[45876], 99.90th=[45876], 99.95th=[46924], 00:19:24.534 | 99.99th=[54264] 00:19:24.534 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:19:24.534 slat (usec): min=4, max=18869, avg=134.03, stdev=774.26 00:19:24.534 clat (usec): min=3304, max=37064, avg=17842.11, stdev=7613.64 00:19:24.534 lat (usec): min=3310, max=37828, avg=17976.14, stdev=7662.18 00:19:24.534 clat percentiles (usec): 00:19:24.534 | 1.00th=[ 5997], 5.00th=[ 7177], 10.00th=[ 8291], 20.00th=[10290], 00:19:24.534 | 30.00th=[13173], 40.00th=[15270], 50.00th=[15664], 60.00th=[17695], 00:19:24.534 | 70.00th=[22676], 80.00th=[25035], 90.00th=[28705], 95.00th=[31327], 00:19:24.534 | 99.00th=[34866], 99.50th=[34866], 99.90th=[36963], 99.95th=[36963], 00:19:24.534 | 99.99th=[36963] 00:19:24.534 bw ( KiB/s): min=12288, max=16384, per=26.87%, avg=14336.00, stdev=2896.31, samples=2 00:19:24.534 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:19:24.534 lat (msec) : 2=0.22%, 4=0.52%, 10=13.58%, 20=52.49%, 50=33.19% 00:19:24.534 lat (msec) : 100=0.01% 00:19:24.534 cpu : usr=4.29%, sys=5.89%, ctx=297, majf=0, minf=1 00:19:24.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:24.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:24.534 issued rwts: total=3383,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:24.534 job1: (groupid=0, jobs=1): err= 0: pid=2175711: Mon Apr 15 01:54:09 2024 00:19:24.534 read: IOPS=3297, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1003msec) 00:19:24.534 slat (usec): min=2, max=15141, avg=133.38, stdev=935.82 00:19:24.534 clat (usec): min=896, max=85720, avg=17876.31, stdev=12278.20 00:19:24.534 lat (usec): min=910, max=85724, avg=18009.69, stdev=12360.09 00:19:24.534 clat percentiles (usec): 00:19:24.534 | 
1.00th=[ 1893], 5.00th=[ 4752], 10.00th=[ 8979], 20.00th=[11076], 00:19:24.534 | 30.00th=[12387], 40.00th=[13829], 50.00th=[14746], 60.00th=[15795], 00:19:24.534 | 70.00th=[18482], 80.00th=[21890], 90.00th=[28443], 95.00th=[47449], 00:19:24.534 | 99.00th=[70779], 99.50th=[71828], 99.90th=[78119], 99.95th=[85459], 00:19:24.534 | 99.99th=[85459] 00:19:24.534 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:19:24.534 slat (usec): min=3, max=22680, avg=131.30, stdev=801.53 00:19:24.534 clat (usec): min=868, max=65273, avg=18780.75, stdev=10608.11 00:19:24.534 lat (usec): min=875, max=65290, avg=18912.05, stdev=10655.03 00:19:24.534 clat percentiles (usec): 00:19:24.534 | 1.00th=[ 1893], 5.00th=[ 4080], 10.00th=[ 6390], 20.00th=[10421], 00:19:24.534 | 30.00th=[13173], 40.00th=[15139], 50.00th=[17171], 60.00th=[20055], 00:19:24.534 | 70.00th=[23462], 80.00th=[26084], 90.00th=[29754], 95.00th=[38536], 00:19:24.534 | 99.00th=[59507], 99.50th=[60031], 99.90th=[64226], 99.95th=[64226], 00:19:24.534 | 99.99th=[65274] 00:19:24.534 bw ( KiB/s): min=13680, max=14992, per=26.87%, avg=14336.00, stdev=927.72, samples=2 00:19:24.534 iops : min= 3420, max= 3748, avg=3584.00, stdev=231.93, samples=2 00:19:24.534 lat (usec) : 1000=0.17% 00:19:24.534 lat (msec) : 2=1.04%, 4=3.35%, 10=12.10%, 20=50.59%, 50=29.42% 00:19:24.534 lat (msec) : 100=3.32% 00:19:24.534 cpu : usr=4.09%, sys=6.19%, ctx=464, majf=0, minf=1 00:19:24.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:19:24.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:24.534 issued rwts: total=3307,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:24.534 job2: (groupid=0, jobs=1): err= 0: pid=2175712: Mon Apr 15 01:54:09 2024 00:19:24.534 read: IOPS=2297, BW=9191KiB/s (9412kB/s)(9228KiB/1004msec) 00:19:24.534 slat (usec): min=2, max=22027, avg=187.00, stdev=1119.02 00:19:24.534 clat (usec): min=3256, max=71679, avg=22612.10, stdev=10956.52 00:19:24.534 lat (usec): min=3265, max=71684, avg=22799.10, stdev=11036.67 00:19:24.534 clat percentiles (usec): 00:19:24.534 | 1.00th=[ 3621], 5.00th=[ 9503], 10.00th=[12387], 20.00th=[13698], 00:19:24.534 | 30.00th=[15270], 40.00th=[17957], 50.00th=[21627], 60.00th=[24249], 00:19:24.534 | 70.00th=[26870], 80.00th=[29492], 90.00th=[33817], 95.00th=[38011], 00:19:24.534 | 99.00th=[71828], 99.50th=[71828], 99.90th=[71828], 99.95th=[71828], 00:19:24.534 | 99.99th=[71828] 00:19:24.534 write: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec); 0 zone resets 00:19:24.534 slat (usec): min=3, max=29469, avg=215.24, stdev=1205.28 00:19:24.534 clat (usec): min=1975, max=84584, avg=29371.00, stdev=19812.30 00:19:24.534 lat (usec): min=1982, max=84590, avg=29586.24, stdev=19909.63 00:19:24.534 clat percentiles (usec): 00:19:24.534 | 1.00th=[ 8225], 5.00th=[10683], 10.00th=[12387], 20.00th=[15664], 00:19:24.534 | 30.00th=[18482], 40.00th=[19530], 50.00th=[21103], 60.00th=[23725], 00:19:24.534 | 70.00th=[27657], 80.00th=[43254], 90.00th=[66323], 95.00th=[77071], 00:19:24.534 | 99.00th=[81265], 99.50th=[84411], 99.90th=[84411], 99.95th=[84411], 00:19:24.534 | 99.99th=[84411] 00:19:24.534 bw ( KiB/s): min= 8192, max=12288, per=19.19%, avg=10240.00, stdev=2896.31, samples=2 00:19:24.534 iops : min= 2048, max= 3072, avg=2560.00, stdev=724.08, samples=2 00:19:24.534 lat (msec) : 2=0.16%, 
4=0.99%, 10=3.14%, 20=40.64%, 50=44.54% 00:19:24.534 lat (msec) : 100=10.52% 00:19:24.534 cpu : usr=2.09%, sys=3.49%, ctx=317, majf=0, minf=1 00:19:24.534 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:19:24.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:24.534 issued rwts: total=2307,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.534 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:24.534 job3: (groupid=0, jobs=1): err= 0: pid=2175713: Mon Apr 15 01:54:09 2024 00:19:24.534 read: IOPS=3538, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1013msec) 00:19:24.534 slat (usec): min=2, max=15417, avg=113.24, stdev=681.58 00:19:24.534 clat (usec): min=6724, max=65528, avg=14036.02, stdev=5486.27 00:19:24.534 lat (usec): min=6730, max=65531, avg=14149.26, stdev=5507.28 00:19:24.534 clat percentiles (usec): 00:19:24.534 | 1.00th=[ 6980], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[11207], 00:19:24.534 | 30.00th=[11731], 40.00th=[12256], 50.00th=[13304], 60.00th=[13960], 00:19:24.534 | 70.00th=[14615], 80.00th=[16450], 90.00th=[17957], 95.00th=[19268], 00:19:24.534 | 99.00th=[24249], 99.50th=[59507], 99.90th=[59507], 99.95th=[65274], 00:19:24.534 | 99.99th=[65274] 00:19:24.534 write: IOPS=3734, BW=14.6MiB/s (15.3MB/s)(14.8MiB/1013msec); 0 zone resets 00:19:24.534 slat (usec): min=3, max=25464, avg=150.21, stdev=943.85 00:19:24.534 clat (usec): min=6200, max=64802, avg=20379.96, stdev=9154.19 00:19:24.534 lat (usec): min=6209, max=64813, avg=20530.18, stdev=9213.02 00:19:24.534 clat percentiles (usec): 00:19:24.534 | 1.00th=[ 8029], 5.00th=[ 9241], 10.00th=[10945], 20.00th=[12125], 00:19:24.535 | 30.00th=[13698], 40.00th=[15795], 50.00th=[18482], 60.00th=[22414], 00:19:24.535 | 70.00th=[24511], 80.00th=[27132], 90.00th=[32637], 95.00th=[37487], 00:19:24.535 | 99.00th=[49546], 99.50th=[52167], 99.90th=[54264], 99.95th=[60031], 00:19:24.535 | 99.99th=[64750] 00:19:24.535 bw ( KiB/s): min=12288, max=16960, per=27.41%, avg=14624.00, stdev=3303.60, samples=2 00:19:24.535 iops : min= 3072, max= 4240, avg=3656.00, stdev=825.90, samples=2 00:19:24.535 lat (msec) : 10=8.50%, 20=65.48%, 50=25.22%, 100=0.80% 00:19:24.535 cpu : usr=3.16%, sys=4.64%, ctx=533, majf=0, minf=1 00:19:24.535 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:19:24.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:24.535 issued rwts: total=3584,3783,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.535 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:24.535 00:19:24.535 Run status group 0 (all jobs): 00:19:24.535 READ: bw=48.5MiB/s (50.9MB/s), 9191KiB/s-13.8MiB/s (9412kB/s-14.5MB/s), io=49.1MiB (51.5MB), run=1003-1013msec 00:19:24.535 WRITE: bw=52.1MiB/s (54.6MB/s), 9.96MiB/s-14.6MiB/s (10.4MB/s-15.3MB/s), io=52.8MiB (55.3MB), run=1003-1013msec 00:19:24.535 00:19:24.535 Disk stats (read/write): 00:19:24.535 nvme0n1: ios=3124/3096, merge=0/0, ticks=35098/28587, in_queue=63685, util=97.70% 00:19:24.535 nvme0n2: ios=2601/2919, merge=0/0, ticks=30287/43480, in_queue=73767, util=99.80% 00:19:24.535 nvme0n3: ios=1832/2048, merge=0/0, ticks=19974/28391, in_queue=48365, util=96.85% 00:19:24.535 nvme0n4: ios=3096/3175, merge=0/0, ticks=21569/28460, in_queue=50029, util=97.25% 00:19:24.535 01:54:09 -- target/fio.sh@53 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:19:24.535 [global] 00:19:24.535 thread=1 00:19:24.535 invalidate=1 00:19:24.535 rw=randwrite 00:19:24.535 time_based=1 00:19:24.535 runtime=1 00:19:24.535 ioengine=libaio 00:19:24.535 direct=1 00:19:24.535 bs=4096 00:19:24.535 iodepth=128 00:19:24.535 norandommap=0 00:19:24.535 numjobs=1 00:19:24.535 00:19:24.535 verify_dump=1 00:19:24.535 verify_backlog=512 00:19:24.535 verify_state_save=0 00:19:24.535 do_verify=1 00:19:24.535 verify=crc32c-intel 00:19:24.535 [job0] 00:19:24.535 filename=/dev/nvme0n1 00:19:24.535 [job1] 00:19:24.535 filename=/dev/nvme0n2 00:19:24.535 [job2] 00:19:24.535 filename=/dev/nvme0n3 00:19:24.535 [job3] 00:19:24.535 filename=/dev/nvme0n4 00:19:24.535 Could not set queue depth (nvme0n1) 00:19:24.535 Could not set queue depth (nvme0n2) 00:19:24.535 Could not set queue depth (nvme0n3) 00:19:24.535 Could not set queue depth (nvme0n4) 00:19:24.535 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:24.535 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:24.535 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:24.535 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:19:24.535 fio-3.35 00:19:24.535 Starting 4 threads 00:19:25.911 00:19:25.911 job0: (groupid=0, jobs=1): err= 0: pid=2175990: Mon Apr 15 01:54:11 2024 00:19:25.911 read: IOPS=4952, BW=19.3MiB/s (20.3MB/s)(19.5MiB/1007msec) 00:19:25.911 slat (usec): min=3, max=11978, avg=94.27, stdev=617.63 00:19:25.911 clat (usec): min=2560, max=25253, avg=11808.77, stdev=3703.28 00:19:25.911 lat (usec): min=5824, max=25263, avg=11903.03, stdev=3729.07 00:19:25.911 clat percentiles (usec): 00:19:25.911 | 1.00th=[ 6521], 5.00th=[ 7635], 10.00th=[ 8586], 20.00th=[ 9110], 00:19:25.911 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10683], 60.00th=[11207], 00:19:25.911 | 70.00th=[12387], 80.00th=[14091], 90.00th=[17957], 95.00th=[20055], 00:19:25.911 | 99.00th=[22414], 99.50th=[23462], 99.90th=[25297], 99.95th=[25297], 00:19:25.911 | 99.99th=[25297] 00:19:25.911 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:19:25.911 slat (usec): min=4, max=9117, avg=95.83, stdev=507.50 00:19:25.911 clat (usec): min=2426, max=25247, avg=13424.68, stdev=3898.96 00:19:25.911 lat (usec): min=3178, max=25254, avg=13520.51, stdev=3928.99 00:19:25.911 clat percentiles (usec): 00:19:25.911 | 1.00th=[ 4424], 5.00th=[ 5866], 10.00th=[ 7111], 20.00th=[ 9503], 00:19:25.911 | 30.00th=[11469], 40.00th=[13698], 50.00th=[14877], 60.00th=[15533], 00:19:25.911 | 70.00th=[16057], 80.00th=[16581], 90.00th=[17171], 95.00th=[17957], 00:19:25.911 | 99.00th=[19530], 99.50th=[19792], 99.90th=[22152], 99.95th=[23462], 00:19:25.911 | 99.99th=[25297] 00:19:25.911 bw ( KiB/s): min=20480, max=20480, per=30.84%, avg=20480.00, stdev= 0.00, samples=2 00:19:25.911 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:19:25.911 lat (msec) : 4=0.21%, 10=29.22%, 20=67.51%, 50=3.07% 00:19:25.911 cpu : usr=5.07%, sys=8.75%, ctx=443, majf=0, minf=1 00:19:25.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:25.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.911 issued rwts: total=4987,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.911 job1: (groupid=0, jobs=1): err= 0: pid=2175991: Mon Apr 15 01:54:11 2024 00:19:25.911 read: IOPS=4601, BW=18.0MiB/s (18.8MB/s)(18.2MiB/1012msec) 00:19:25.911 slat (usec): min=3, max=8851, avg=94.22, stdev=575.83 00:19:25.911 clat (usec): min=5746, max=25425, avg=11582.94, stdev=3853.05 00:19:25.911 lat (usec): min=5751, max=25439, avg=11677.16, stdev=3879.64 00:19:25.911 clat percentiles (usec): 00:19:25.911 | 1.00th=[ 6390], 5.00th=[ 7046], 10.00th=[ 8356], 20.00th=[ 8979], 00:19:25.911 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10290], 60.00th=[10683], 00:19:25.911 | 70.00th=[11469], 80.00th=[14091], 90.00th=[17957], 95.00th=[20579], 00:19:25.911 | 99.00th=[22938], 99.50th=[22938], 99.90th=[23725], 99.95th=[25297], 00:19:25.911 | 99.99th=[25297] 00:19:25.911 write: IOPS=5059, BW=19.8MiB/s (20.7MB/s)(20.0MiB/1012msec); 0 zone resets 00:19:25.911 slat (usec): min=3, max=20160, avg=101.81, stdev=653.01 00:19:25.911 clat (usec): min=2573, max=54850, avg=14329.56, stdev=6279.69 00:19:25.911 lat (usec): min=3265, max=54929, avg=14431.37, stdev=6326.27 00:19:25.911 clat percentiles (usec): 00:19:25.911 | 1.00th=[ 3785], 5.00th=[ 5276], 10.00th=[ 6915], 20.00th=[ 8979], 00:19:25.911 | 30.00th=[12256], 40.00th=[13829], 50.00th=[14746], 60.00th=[15926], 00:19:25.911 | 70.00th=[16319], 80.00th=[16909], 90.00th=[17433], 95.00th=[26084], 00:19:25.911 | 99.00th=[38536], 99.50th=[38536], 99.90th=[38536], 99.95th=[39060], 00:19:25.911 | 99.99th=[54789] 00:19:25.911 bw ( KiB/s): min=19864, max=20464, per=30.37%, avg=20164.00, stdev=424.26, samples=2 00:19:25.911 iops : min= 4966, max= 5116, avg=5041.00, stdev=106.07, samples=2 00:19:25.911 lat (msec) : 4=0.70%, 10=32.98%, 20=60.03%, 50=6.29%, 100=0.01% 00:19:25.911 cpu : usr=5.04%, sys=8.01%, ctx=451, majf=0, minf=1 00:19:25.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:25.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.911 issued rwts: total=4657,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.911 job2: (groupid=0, jobs=1): err= 0: pid=2175992: Mon Apr 15 01:54:11 2024 00:19:25.911 read: IOPS=1517, BW=6071KiB/s (6217kB/s)(6144KiB/1012msec) 00:19:25.911 slat (usec): min=2, max=58601, avg=307.78, stdev=2596.93 00:19:25.911 clat (usec): min=15627, max=89605, avg=40638.47, stdev=17989.23 00:19:25.911 lat (usec): min=16083, max=89610, avg=40946.25, stdev=18079.28 00:19:25.911 clat percentiles (usec): 00:19:25.911 | 1.00th=[17957], 5.00th=[22414], 10.00th=[23987], 20.00th=[27132], 00:19:25.911 | 30.00th=[30016], 40.00th=[33162], 50.00th=[34866], 60.00th=[38011], 00:19:25.912 | 70.00th=[40109], 80.00th=[47449], 90.00th=[74974], 95.00th=[80217], 00:19:25.912 | 99.00th=[89654], 99.50th=[89654], 99.90th=[89654], 99.95th=[89654], 00:19:25.912 | 99.99th=[89654] 00:19:25.912 write: IOPS=1927, BW=7711KiB/s (7897kB/s)(7804KiB/1012msec); 0 zone resets 00:19:25.912 slat (usec): min=3, max=73026, avg=267.28, stdev=2099.87 00:19:25.912 clat (usec): min=6120, max=89947, avg=33596.42, stdev=19149.11 00:19:25.912 lat (usec): min=13200, max=89965, avg=33863.70, stdev=19243.67 00:19:25.912 clat percentiles (usec): 00:19:25.912 | 1.00th=[14615], 
5.00th=[16712], 10.00th=[18482], 20.00th=[20055], 00:19:25.912 | 30.00th=[22414], 40.00th=[23462], 50.00th=[25560], 60.00th=[30278], 00:19:25.912 | 70.00th=[33162], 80.00th=[41157], 90.00th=[69731], 95.00th=[81265], 00:19:25.912 | 99.00th=[88605], 99.50th=[88605], 99.90th=[89654], 99.95th=[89654], 00:19:25.912 | 99.99th=[89654] 00:19:25.912 bw ( KiB/s): min= 6392, max= 8192, per=10.98%, avg=7292.00, stdev=1272.79, samples=2 00:19:25.912 iops : min= 1598, max= 2048, avg=1823.00, stdev=318.20, samples=2 00:19:25.912 lat (msec) : 10=0.03%, 20=11.47%, 50=71.81%, 100=16.69% 00:19:25.912 cpu : usr=0.40%, sys=2.87%, ctx=153, majf=0, minf=1 00:19:25.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:19:25.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.912 issued rwts: total=1536,1951,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.912 job3: (groupid=0, jobs=1): err= 0: pid=2175993: Mon Apr 15 01:54:11 2024 00:19:25.912 read: IOPS=4177, BW=16.3MiB/s (17.1MB/s)(16.5MiB/1011msec) 00:19:25.912 slat (usec): min=3, max=13280, avg=121.44, stdev=816.29 00:19:25.912 clat (usec): min=3671, max=32292, avg=15798.01, stdev=4405.81 00:19:25.912 lat (usec): min=7255, max=32314, avg=15919.45, stdev=4437.74 00:19:25.912 clat percentiles (usec): 00:19:25.912 | 1.00th=[ 9372], 5.00th=[10159], 10.00th=[10683], 20.00th=[11863], 00:19:25.912 | 30.00th=[12387], 40.00th=[13698], 50.00th=[15401], 60.00th=[16188], 00:19:25.912 | 70.00th=[17957], 80.00th=[19530], 90.00th=[21365], 95.00th=[24249], 00:19:25.912 | 99.00th=[28967], 99.50th=[30540], 99.90th=[31589], 99.95th=[32113], 00:19:25.912 | 99.99th=[32375] 00:19:25.912 write: IOPS=4557, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1011msec); 0 zone resets 00:19:25.912 slat (usec): min=4, max=13175, avg=94.98, stdev=625.76 00:19:25.912 clat (usec): min=2525, max=31509, avg=13310.24, stdev=3815.18 00:19:25.912 lat (usec): min=2543, max=31517, avg=13405.23, stdev=3817.01 00:19:25.912 clat percentiles (usec): 00:19:25.912 | 1.00th=[ 5211], 5.00th=[ 6915], 10.00th=[ 7963], 20.00th=[ 9896], 00:19:25.912 | 30.00th=[11469], 40.00th=[12780], 50.00th=[13698], 60.00th=[14615], 00:19:25.912 | 70.00th=[15533], 80.00th=[16319], 90.00th=[17171], 95.00th=[18482], 00:19:25.912 | 99.00th=[23200], 99.50th=[26346], 99.90th=[26608], 99.95th=[26608], 00:19:25.912 | 99.99th=[31589] 00:19:25.912 bw ( KiB/s): min=17984, max=18872, per=27.75%, avg=18428.00, stdev=627.91, samples=2 00:19:25.912 iops : min= 4496, max= 4718, avg=4607.00, stdev=156.98, samples=2 00:19:25.912 lat (msec) : 4=0.22%, 10=12.58%, 20=76.73%, 50=10.47% 00:19:25.912 cpu : usr=4.75%, sys=8.02%, ctx=284, majf=0, minf=1 00:19:25.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:19:25.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:25.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:25.912 issued rwts: total=4223,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:25.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:25.912 00:19:25.912 Run status group 0 (all jobs): 00:19:25.912 READ: bw=59.5MiB/s (62.3MB/s), 6071KiB/s-19.3MiB/s (6217kB/s-20.3MB/s), io=60.2MiB (63.1MB), run=1007-1012msec 00:19:25.912 WRITE: bw=64.8MiB/s (68.0MB/s), 7711KiB/s-19.9MiB/s (7897kB/s-20.8MB/s), io=65.6MiB (68.8MB), run=1007-1012msec 
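For reference before the teardown below, the fio passes above (write/randwrite at iodepth 1 and 128) all ran against the same target stack, assembled by the rpc.py calls traced at target/fio.sh@19-46. A sketch condensing them into one replayable sequence; rpc stands in for the absolute scripts/rpc.py path, and the --hostnqn/--hostid flags shown in the log are omitted for brevity:

#!/usr/bin/env bash
# Rebuild the fio target: TCP transport, two plain malloc namespaces,
# one raid0 and one concat bdev, one subsystem listening on 10.0.0.2:4420.
rpc=./scripts/rpc.py   # placeholder for the full path used in the log
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in 1 2 3 4 5 6 7; do $rpc bdev_malloc_create 64 512; done   # Malloc0..Malloc6
$rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # -> /dev/nvme0n1..n4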
00:19:25.912 00:19:25.912 Disk stats (read/write): 00:19:25.912 nvme0n1: ios=4122/4379, merge=0/0, ticks=48348/56736, in_queue=105084, util=98.00% 00:19:25.912 nvme0n2: ios=4132/4099, merge=0/0, ticks=46338/53848, in_queue=100186, util=91.05% 00:19:25.912 nvme0n3: ios=1584/1551, merge=0/0, ticks=27609/19078, in_queue=46687, util=93.52% 00:19:25.912 nvme0n4: ios=3642/3701, merge=0/0, ticks=56106/48277, in_queue=104383, util=98.00% 00:19:25.912 01:54:11 -- target/fio.sh@55 -- # sync 00:19:25.912 01:54:11 -- target/fio.sh@59 -- # fio_pid=2176135 00:19:25.912 01:54:11 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:19:25.912 01:54:11 -- target/fio.sh@61 -- # sleep 3 00:19:25.912 [global] 00:19:25.912 thread=1 00:19:25.912 invalidate=1 00:19:25.912 rw=read 00:19:25.912 time_based=1 00:19:25.912 runtime=10 00:19:25.912 ioengine=libaio 00:19:25.912 direct=1 00:19:25.912 bs=4096 00:19:25.912 iodepth=1 00:19:25.912 norandommap=1 00:19:25.912 numjobs=1 00:19:25.912 00:19:25.912 [job0] 00:19:25.912 filename=/dev/nvme0n1 00:19:25.912 [job1] 00:19:25.912 filename=/dev/nvme0n2 00:19:25.912 [job2] 00:19:25.912 filename=/dev/nvme0n3 00:19:25.912 [job3] 00:19:25.912 filename=/dev/nvme0n4 00:19:25.912 Could not set queue depth (nvme0n1) 00:19:25.912 Could not set queue depth (nvme0n2) 00:19:25.912 Could not set queue depth (nvme0n3) 00:19:25.912 Could not set queue depth (nvme0n4) 00:19:26.170 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:26.170 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:26.170 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:26.170 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:26.170 fio-3.35 00:19:26.170 Starting 4 threads 00:19:29.485 01:54:14 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:19:29.485 01:54:14 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:19:29.485 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=286720, buflen=4096 00:19:29.485 fio: pid=2176268, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:29.485 01:54:14 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:29.485 01:54:14 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:19:29.485 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=12447744, buflen=4096 00:19:29.485 fio: pid=2176253, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:29.753 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=389120, buflen=4096 00:19:29.753 fio: pid=2176229, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:29.753 01:54:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:29.753 01:54:15 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:19:30.011 01:54:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:30.011 01:54:15 -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:19:30.011 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=2777088, buflen=4096 00:19:30.011 fio: pid=2176230, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:19:30.011 00:19:30.011 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2176229: Mon Apr 15 01:54:15 2024 00:19:30.011 read: IOPS=28, BW=113KiB/s (115kB/s)(380KiB/3375msec) 00:19:30.011 slat (usec): min=12, max=8768, avg=185.76, stdev=1123.50 00:19:30.011 clat (usec): min=411, max=44998, avg=35069.50, stdev=14451.18 00:19:30.011 lat (usec): min=435, max=49972, avg=35257.05, stdev=14563.17 00:19:30.011 clat percentiles (usec): 00:19:30.011 | 1.00th=[ 412], 5.00th=[ 465], 10.00th=[ 498], 20.00th=[41157], 00:19:30.011 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:19:30.011 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:19:30.011 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:19:30.011 | 99.99th=[44827] 00:19:30.011 bw ( KiB/s): min= 96, max= 208, per=2.69%, avg=114.67, stdev=45.72, samples=6 00:19:30.011 iops : min= 24, max= 52, avg=28.67, stdev=11.43, samples=6 00:19:30.011 lat (usec) : 500=11.46%, 750=2.08%, 1000=1.04% 00:19:30.011 lat (msec) : 50=84.38% 00:19:30.011 cpu : usr=0.03%, sys=0.09%, ctx=99, majf=0, minf=1 00:19:30.011 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:30.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.011 complete : 0=1.0%, 4=99.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.011 issued rwts: total=96,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.011 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:30.012 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2176230: Mon Apr 15 01:54:15 2024 00:19:30.012 read: IOPS=185, BW=741KiB/s (759kB/s)(2712KiB/3659msec) 00:19:30.012 slat (usec): min=5, max=3735, avg=25.00, stdev=177.64 00:19:30.012 clat (usec): min=413, max=41363, avg=5334.45, stdev=13019.90 00:19:30.012 lat (usec): min=425, max=44984, avg=5359.47, stdev=13049.48 00:19:30.012 clat percentiles (usec): 00:19:30.012 | 1.00th=[ 449], 5.00th=[ 482], 10.00th=[ 515], 20.00th=[ 537], 00:19:30.012 | 30.00th=[ 545], 40.00th=[ 553], 50.00th=[ 562], 60.00th=[ 578], 00:19:30.012 | 70.00th=[ 603], 80.00th=[ 660], 90.00th=[41157], 95.00th=[41157], 00:19:30.012 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:30.012 | 99.99th=[41157] 00:19:30.012 bw ( KiB/s): min= 96, max= 3528, per=17.72%, avg=753.00, stdev=1284.29, samples=7 00:19:30.012 iops : min= 24, max= 882, avg=188.14, stdev=321.03, samples=7 00:19:30.012 lat (usec) : 500=7.22%, 750=78.94%, 1000=1.47% 00:19:30.012 lat (msec) : 2=0.29%, 10=0.15%, 50=11.78% 00:19:30.012 cpu : usr=0.08%, sys=0.36%, ctx=682, majf=0, minf=1 00:19:30.012 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:30.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.012 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.012 issued rwts: total=679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.012 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:30.012 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2176253: Mon 
Apr 15 01:54:15 2024 00:19:30.012 read: IOPS=975, BW=3900KiB/s (3994kB/s)(11.9MiB/3117msec) 00:19:30.012 slat (nsec): min=4942, max=77559, avg=19125.26, stdev=9986.78 00:19:30.012 clat (usec): min=393, max=41548, avg=994.25, stdev=4440.30 00:19:30.012 lat (usec): min=406, max=41561, avg=1013.37, stdev=4441.03 00:19:30.012 clat percentiles (usec): 00:19:30.012 | 1.00th=[ 408], 5.00th=[ 433], 10.00th=[ 441], 20.00th=[ 449], 00:19:30.012 | 30.00th=[ 457], 40.00th=[ 469], 50.00th=[ 486], 60.00th=[ 498], 00:19:30.012 | 70.00th=[ 529], 80.00th=[ 553], 90.00th=[ 611], 95.00th=[ 644], 00:19:30.012 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:19:30.012 | 99.99th=[41681] 00:19:30.012 bw ( KiB/s): min= 96, max= 7672, per=92.89%, avg=3942.67, stdev=3319.22, samples=6 00:19:30.012 iops : min= 24, max= 1918, avg=985.67, stdev=829.81, samples=6 00:19:30.012 lat (usec) : 500=60.69%, 750=37.80%, 1000=0.26% 00:19:30.012 lat (msec) : 50=1.22% 00:19:30.012 cpu : usr=0.87%, sys=2.02%, ctx=3042, majf=0, minf=1 00:19:30.012 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:30.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.012 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.012 issued rwts: total=3040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.012 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:30.012 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2176268: Mon Apr 15 01:54:15 2024 00:19:30.012 read: IOPS=24, BW=97.7KiB/s (100kB/s)(280KiB/2867msec) 00:19:30.012 slat (nsec): min=12285, max=43563, avg=24690.44, stdev=10149.38 00:19:30.012 clat (usec): min=650, max=43004, avg=40611.64, stdev=6904.22 00:19:30.012 lat (usec): min=667, max=43020, avg=40636.47, stdev=6904.25 00:19:30.012 clat percentiles (usec): 00:19:30.012 | 1.00th=[ 652], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:19:30.012 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:19:30.012 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:19:30.012 | 99.00th=[43254], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:19:30.012 | 99.99th=[43254] 00:19:30.012 bw ( KiB/s): min= 96, max= 96, per=2.26%, avg=96.00, stdev= 0.00, samples=5 00:19:30.012 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:19:30.012 lat (usec) : 750=1.41%, 1000=1.41% 00:19:30.012 lat (msec) : 50=95.77% 00:19:30.012 cpu : usr=0.14%, sys=0.00%, ctx=73, majf=0, minf=1 00:19:30.012 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:30.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.012 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:30.012 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:30.012 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:30.012 00:19:30.012 Run status group 0 (all jobs): 00:19:30.012 READ: bw=4244KiB/s (4346kB/s), 97.7KiB/s-3900KiB/s (100kB/s-3994kB/s), io=15.2MiB (15.9MB), run=2867-3659msec 00:19:30.012 00:19:30.012 Disk stats (read/write): 00:19:30.012 nvme0n1: ios=135/0, merge=0/0, ticks=4372/0, in_queue=4372, util=99.20% 00:19:30.012 nvme0n2: ios=676/0, merge=0/0, ticks=3532/0, in_queue=3532, util=95.62% 00:19:30.012 nvme0n3: ios=3087/0, merge=0/0, ticks=3362/0, in_queue=3362, util=99.78% 00:19:30.012 nvme0n4: ios=121/0, merge=0/0, ticks=3969/0, in_queue=3969, util=100.00% 
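The err=121 (Remote I/O error) results in the report above are the point of this test: the xtrace entries around it show target/fio.sh starting the read workload in the background and then deleting the backing raid, concat, and malloc bdevs out from under it. A condensed, illustrative reconstruction of that sequence from the fio.sh@55 through @70 trace lines (the backgrounding with & and $!, and the || shorthand, are inferred from the traced values, not copied from the script):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sync                                                              # fio.sh@55
"$spdk/scripts/fio-wrapper" -p nvmf -i 4096 -d 1 -t read -r 10 &  # fio.sh@58
fio_pid=$!                                                        # fio.sh@59 (2176135 in this run)
sleep 3                                                           # fio.sh@61
"$spdk/scripts/rpc.py" bdev_raid_delete concat0                   # fio.sh@63
"$spdk/scripts/rpc.py" bdev_raid_delete raid0                     # fio.sh@64
for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do  # fio.sh@65
    "$spdk/scripts/rpc.py" bdev_malloc_delete "$malloc_bdev"      # fio.sh@66 (Malloc0..Malloc6 in this run)
done
fio_status=0                                                      # fio.sh@69
wait "$fio_pid" || fio_status=4                                   # fio.sh@70: fio exiting non-zero is expected

Each deletion surfaces in fio as an io_u error with err=121 on the corresponding /dev/nvme0nX, which is why the harness later prints 'nvmf hotplug test: fio failed as expected'.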
00:19:30.270 01:54:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:30.270 01:54:15 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:19:30.528 01:54:15 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:30.528 01:54:15 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:19:30.786 01:54:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:30.786 01:54:16 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:19:31.045 01:54:16 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:19:31.045 01:54:16 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:19:31.303 01:54:16 -- target/fio.sh@69 -- # fio_status=0 00:19:31.303 01:54:16 -- target/fio.sh@70 -- # wait 2176135 00:19:31.303 01:54:16 -- target/fio.sh@70 -- # fio_status=4 00:19:31.303 01:54:16 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:31.303 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:31.303 01:54:16 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:31.303 01:54:16 -- common/autotest_common.sh@1198 -- # local i=0 00:19:31.303 01:54:16 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:31.303 01:54:16 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:31.303 01:54:16 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:31.303 01:54:16 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:31.303 01:54:16 -- common/autotest_common.sh@1210 -- # return 0 00:19:31.303 01:54:16 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:19:31.303 01:54:16 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:19:31.303 nvmf hotplug test: fio failed as expected 00:19:31.303 01:54:16 -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:31.561 01:54:17 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:19:31.561 01:54:17 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:19:31.561 01:54:17 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:19:31.561 01:54:17 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:19:31.561 01:54:17 -- target/fio.sh@91 -- # nvmftestfini 00:19:31.561 01:54:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:31.561 01:54:17 -- nvmf/common.sh@116 -- # sync 00:19:31.561 01:54:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:31.561 01:54:17 -- nvmf/common.sh@119 -- # set +e 00:19:31.561 01:54:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:31.561 01:54:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:31.561 rmmod nvme_tcp 00:19:31.561 rmmod nvme_fabrics 00:19:31.561 rmmod nvme_keyring 00:19:31.561 01:54:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:31.561 01:54:17 -- nvmf/common.sh@123 -- # set -e 00:19:31.561 01:54:17 -- nvmf/common.sh@124 -- # return 0 00:19:31.561 01:54:17 -- nvmf/common.sh@477 -- # '[' -n 2173437 ']' 00:19:31.561 01:54:17 -- nvmf/common.sh@478 -- # killprocess 2173437 00:19:31.561 01:54:17 -- 
common/autotest_common.sh@926 -- # '[' -z 2173437 ']' 00:19:31.561 01:54:17 -- common/autotest_common.sh@930 -- # kill -0 2173437 00:19:31.561 01:54:17 -- common/autotest_common.sh@931 -- # uname 00:19:31.561 01:54:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:31.561 01:54:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2173437 00:19:31.821 01:54:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:31.821 01:54:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:31.821 01:54:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2173437' 00:19:31.821 killing process with pid 2173437 00:19:31.821 01:54:17 -- common/autotest_common.sh@945 -- # kill 2173437 00:19:31.821 01:54:17 -- common/autotest_common.sh@950 -- # wait 2173437 00:19:31.821 01:54:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:31.821 01:54:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:31.821 01:54:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:31.821 01:54:17 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:31.821 01:54:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:31.821 01:54:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:31.821 01:54:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:31.821 01:54:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.362 01:54:19 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:34.362 00:19:34.362 real 0m23.979s 00:19:34.362 user 1m22.860s 00:19:34.362 sys 0m6.361s 00:19:34.362 01:54:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:34.362 01:54:19 -- common/autotest_common.sh@10 -- # set +x 00:19:34.362 ************************************ 00:19:34.362 END TEST nvmf_fio_target 00:19:34.362 ************************************ 00:19:34.362 01:54:19 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:34.362 01:54:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:34.362 01:54:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:34.362 01:54:19 -- common/autotest_common.sh@10 -- # set +x 00:19:34.362 ************************************ 00:19:34.362 START TEST nvmf_bdevio 00:19:34.362 ************************************ 00:19:34.362 01:54:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:19:34.362 * Looking for test storage... 
00:19:34.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:34.362 01:54:19 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:34.362 01:54:19 -- nvmf/common.sh@7 -- # uname -s 00:19:34.362 01:54:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:34.362 01:54:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:34.362 01:54:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:34.362 01:54:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:34.362 01:54:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:34.362 01:54:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:34.362 01:54:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:34.362 01:54:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:34.362 01:54:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:34.362 01:54:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:34.362 01:54:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.362 01:54:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:34.362 01:54:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:34.362 01:54:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:34.362 01:54:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:34.362 01:54:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:34.362 01:54:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:34.362 01:54:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:34.362 01:54:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:34.362 01:54:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.362 01:54:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.362 01:54:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.362 01:54:19 -- paths/export.sh@5 -- # export PATH 00:19:34.362 01:54:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:34.362 01:54:19 -- nvmf/common.sh@46 -- # : 0 00:19:34.362 01:54:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:34.362 01:54:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:34.362 01:54:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:34.362 01:54:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:34.362 01:54:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:34.362 01:54:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:34.362 01:54:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:34.362 01:54:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:34.362 01:54:19 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:34.362 01:54:19 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:34.362 01:54:19 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:34.362 01:54:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:34.362 01:54:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:34.362 01:54:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:34.362 01:54:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:34.362 01:54:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:34.362 01:54:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.362 01:54:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:34.362 01:54:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:34.362 01:54:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:34.362 01:54:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:34.362 01:54:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:34.362 01:54:19 -- common/autotest_common.sh@10 -- # set +x 00:19:36.268 01:54:21 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:36.268 01:54:21 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:36.268 01:54:21 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:36.268 01:54:21 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:36.268 01:54:21 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:36.268 01:54:21 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:36.268 01:54:21 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:36.268 01:54:21 -- nvmf/common.sh@294 -- # net_devs=() 00:19:36.268 01:54:21 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:36.268 01:54:21 -- nvmf/common.sh@295 
-- # e810=() 00:19:36.268 01:54:21 -- nvmf/common.sh@295 -- # local -ga e810 00:19:36.268 01:54:21 -- nvmf/common.sh@296 -- # x722=() 00:19:36.268 01:54:21 -- nvmf/common.sh@296 -- # local -ga x722 00:19:36.268 01:54:21 -- nvmf/common.sh@297 -- # mlx=() 00:19:36.268 01:54:21 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:36.268 01:54:21 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:36.268 01:54:21 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:36.268 01:54:21 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:36.268 01:54:21 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:36.268 01:54:21 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:36.268 01:54:21 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:36.268 01:54:21 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:36.268 01:54:21 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:36.268 01:54:21 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:36.268 01:54:21 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:36.268 01:54:21 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:36.268 01:54:21 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:36.268 01:54:21 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:36.268 01:54:21 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:36.268 01:54:21 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:36.268 01:54:21 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:36.268 01:54:21 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:36.268 01:54:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:36.268 01:54:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:36.268 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:36.268 01:54:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:36.268 01:54:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:36.268 01:54:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.268 01:54:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.268 01:54:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:36.268 01:54:21 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:36.268 01:54:21 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:36.268 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:36.268 01:54:21 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:36.268 01:54:21 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:36.268 01:54:21 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:36.268 01:54:21 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:36.268 01:54:21 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:36.269 01:54:21 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:36.269 01:54:21 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:36.269 01:54:21 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:36.269 01:54:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:36.269 01:54:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.269 01:54:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:36.269 01:54:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.269 01:54:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:36.269 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:19:36.269 01:54:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.269 01:54:21 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:36.269 01:54:21 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:36.269 01:54:21 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:36.269 01:54:21 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:36.269 01:54:21 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:36.269 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:36.269 01:54:21 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:36.269 01:54:21 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:36.269 01:54:21 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:36.269 01:54:21 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:36.269 01:54:21 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:36.269 01:54:21 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:36.269 01:54:21 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:36.269 01:54:21 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:36.269 01:54:21 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:36.269 01:54:21 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:36.269 01:54:21 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:36.269 01:54:21 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:36.269 01:54:21 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:36.269 01:54:21 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:36.269 01:54:21 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:36.269 01:54:21 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:36.269 01:54:21 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:36.269 01:54:21 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:36.269 01:54:21 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:36.269 01:54:21 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:36.269 01:54:21 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:36.269 01:54:21 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:36.269 01:54:21 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:36.269 01:54:21 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:36.269 01:54:21 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:36.269 01:54:21 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:36.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:36.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:19:36.269 00:19:36.269 --- 10.0.0.2 ping statistics --- 00:19:36.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.269 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:19:36.269 01:54:21 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:36.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:36.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:19:36.269 00:19:36.269 --- 10.0.0.1 ping statistics --- 00:19:36.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:36.269 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:19:36.269 01:54:21 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:36.269 01:54:21 -- nvmf/common.sh@410 -- # return 0 00:19:36.269 01:54:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:36.269 01:54:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:36.269 01:54:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:36.269 01:54:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:36.269 01:54:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:36.269 01:54:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:36.269 01:54:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:36.269 01:54:21 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:36.269 01:54:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:36.269 01:54:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:36.269 01:54:21 -- common/autotest_common.sh@10 -- # set +x 00:19:36.269 01:54:21 -- nvmf/common.sh@469 -- # nvmfpid=2178887 00:19:36.269 01:54:21 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:19:36.269 01:54:21 -- nvmf/common.sh@470 -- # waitforlisten 2178887 00:19:36.269 01:54:21 -- common/autotest_common.sh@819 -- # '[' -z 2178887 ']' 00:19:36.269 01:54:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.269 01:54:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:36.269 01:54:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.269 01:54:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:36.269 01:54:21 -- common/autotest_common.sh@10 -- # set +x 00:19:36.269 [2024-04-15 01:54:21.794025] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:19:36.269 [2024-04-15 01:54:21.794103] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.269 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.269 [2024-04-15 01:54:21.856915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:36.528 [2024-04-15 01:54:21.947345] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:36.528 [2024-04-15 01:54:21.947498] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.528 [2024-04-15 01:54:21.947516] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.528 [2024-04-15 01:54:21.947529] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
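For orientation, the nvmf_tcp_init trace above (nvmf/common.sh@228 through @267, repeated later for the no-huge run) builds a two-port loopback topology: one port of the NIC (cvl_0_0) is moved into a private network namespace and addressed as the target, the peer port (cvl_0_1) stays in the root namespace as the initiator, and reachability is verified with a ping in each direction. A condensed sketch of those steps, with interface names and addresses exactly as logged:

ip netns add cvl_0_0_ns_spdk                                        # common.sh@247
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # common.sh@250
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # common.sh@253 (initiator side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # common.sh@254 (target side)
ip link set cvl_0_1 up                                              # common.sh@257
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up                # common.sh@259
ip netns exec cvl_0_0_ns_spdk ip link set lo up                     # common.sh@260
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # common.sh@263
ping -c 1 10.0.0.2                                                  # common.sh@266, initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # common.sh@267, target -> initiator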
00:19:36.528 [2024-04-15 01:54:21.947615] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:36.528 [2024-04-15 01:54:21.947708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:36.528 [2024-04-15 01:54:21.947776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:36.528 [2024-04-15 01:54:21.947779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:37.469 01:54:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:37.469 01:54:22 -- common/autotest_common.sh@852 -- # return 0 00:19:37.469 01:54:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:37.469 01:54:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:37.469 01:54:22 -- common/autotest_common.sh@10 -- # set +x 00:19:37.469 01:54:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.469 01:54:22 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:37.469 01:54:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:37.469 01:54:22 -- common/autotest_common.sh@10 -- # set +x 00:19:37.469 [2024-04-15 01:54:22.788723] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.469 01:54:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:37.469 01:54:22 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:37.469 01:54:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:37.469 01:54:22 -- common/autotest_common.sh@10 -- # set +x 00:19:37.469 Malloc0 00:19:37.469 01:54:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:37.469 01:54:22 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:37.469 01:54:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:37.469 01:54:22 -- common/autotest_common.sh@10 -- # set +x 00:19:37.469 01:54:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:37.469 01:54:22 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:37.469 01:54:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:37.469 01:54:22 -- common/autotest_common.sh@10 -- # set +x 00:19:37.469 01:54:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:37.469 01:54:22 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:37.469 01:54:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:37.469 01:54:22 -- common/autotest_common.sh@10 -- # set +x 00:19:37.470 [2024-04-15 01:54:22.839956] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.470 01:54:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:37.470 01:54:22 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:19:37.470 01:54:22 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:37.470 01:54:22 -- nvmf/common.sh@520 -- # config=() 00:19:37.470 01:54:22 -- nvmf/common.sh@520 -- # local subsystem config 00:19:37.470 01:54:22 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:37.470 01:54:22 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:37.470 { 00:19:37.470 "params": { 00:19:37.470 "name": "Nvme$subsystem", 00:19:37.470 "trtype": "$TEST_TRANSPORT", 00:19:37.470 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:37.470 "adrfam": "ipv4", 00:19:37.470 "trsvcid": 
"$NVMF_PORT", 00:19:37.470 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:37.470 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:37.470 "hdgst": ${hdgst:-false}, 00:19:37.470 "ddgst": ${ddgst:-false} 00:19:37.470 }, 00:19:37.470 "method": "bdev_nvme_attach_controller" 00:19:37.470 } 00:19:37.470 EOF 00:19:37.470 )") 00:19:37.470 01:54:22 -- nvmf/common.sh@542 -- # cat 00:19:37.470 01:54:22 -- nvmf/common.sh@544 -- # jq . 00:19:37.470 01:54:22 -- nvmf/common.sh@545 -- # IFS=, 00:19:37.470 01:54:22 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:37.470 "params": { 00:19:37.470 "name": "Nvme1", 00:19:37.470 "trtype": "tcp", 00:19:37.470 "traddr": "10.0.0.2", 00:19:37.470 "adrfam": "ipv4", 00:19:37.470 "trsvcid": "4420", 00:19:37.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:37.470 "hdgst": false, 00:19:37.470 "ddgst": false 00:19:37.470 }, 00:19:37.470 "method": "bdev_nvme_attach_controller" 00:19:37.470 }' 00:19:37.470 [2024-04-15 01:54:22.881067] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:19:37.470 [2024-04-15 01:54:22.881145] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179049 ] 00:19:37.470 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.470 [2024-04-15 01:54:22.942427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:37.470 [2024-04-15 01:54:23.028631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.470 [2024-04-15 01:54:23.028680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.470 [2024-04-15 01:54:23.028683] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:37.729 [2024-04-15 01:54:23.201350] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:37.729 [2024-04-15 01:54:23.201398] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:37.729 I/O targets: 00:19:37.729 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:37.729 00:19:37.729 00:19:37.729 CUnit - A unit testing framework for C - Version 2.1-3 00:19:37.729 http://cunit.sourceforge.net/ 00:19:37.729 00:19:37.729 00:19:37.729 Suite: bdevio tests on: Nvme1n1 00:19:37.729 Test: blockdev write read block ...passed 00:19:37.729 Test: blockdev write zeroes read block ...passed 00:19:37.729 Test: blockdev write zeroes read no split ...passed 00:19:37.729 Test: blockdev write zeroes read split ...passed 00:19:37.988 Test: blockdev write zeroes read split partial ...passed 00:19:37.988 Test: blockdev reset ...[2024-04-15 01:54:23.422958] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:37.988 [2024-04-15 01:54:23.423068] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13c5e00 (9): Bad file descriptor 00:19:37.988 [2024-04-15 01:54:23.532319] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:37.988 passed 00:19:37.988 Test: blockdev write read 8 blocks ...passed 00:19:37.988 Test: blockdev write read size > 128k ...passed 00:19:37.988 Test: blockdev write read invalid size ...passed 00:19:37.988 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:37.988 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:37.988 Test: blockdev write read max offset ...passed 00:19:38.247 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:38.247 Test: blockdev writev readv 8 blocks ...passed 00:19:38.247 Test: blockdev writev readv 30 x 1block ...passed 00:19:38.247 Test: blockdev writev readv block ...passed 00:19:38.247 Test: blockdev writev readv size > 128k ...passed 00:19:38.247 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:38.247 Test: blockdev comparev and writev ...[2024-04-15 01:54:23.713253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:38.247 [2024-04-15 01:54:23.713295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:38.247 [2024-04-15 01:54:23.713320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:38.247 [2024-04-15 01:54:23.713337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:38.247 [2024-04-15 01:54:23.713734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:38.247 [2024-04-15 01:54:23.713759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:38.247 [2024-04-15 01:54:23.713781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:38.247 [2024-04-15 01:54:23.713798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:38.247 [2024-04-15 01:54:23.714204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:38.247 [2024-04-15 01:54:23.714228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:38.247 [2024-04-15 01:54:23.714250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:38.247 [2024-04-15 01:54:23.714267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:38.247 [2024-04-15 01:54:23.714667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:38.247 [2024-04-15 01:54:23.714690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:38.247 [2024-04-15 01:54:23.714712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:38.247 [2024-04-15 01:54:23.714728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:38.247 passed 00:19:38.247 Test: blockdev nvme passthru rw ...passed 00:19:38.247 Test: blockdev nvme passthru vendor specific ...[2024-04-15 01:54:23.798471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:38.247 [2024-04-15 01:54:23.798498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:38.247 [2024-04-15 01:54:23.798743] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:38.247 [2024-04-15 01:54:23.798767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:38.247 [2024-04-15 01:54:23.799001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:38.247 [2024-04-15 01:54:23.799025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:38.247 [2024-04-15 01:54:23.799273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:38.247 [2024-04-15 01:54:23.799297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:38.247 passed 00:19:38.247 Test: blockdev nvme admin passthru ...passed 00:19:38.247 Test: blockdev copy ...passed 00:19:38.247 00:19:38.247 Run Summary: Type Total Ran Passed Failed Inactive 00:19:38.247 suites 1 1 n/a 0 0 00:19:38.247 tests 23 23 23 0 0 00:19:38.247 asserts 152 152 152 0 n/a 00:19:38.247 00:19:38.247 Elapsed time = 1.307 seconds 00:19:38.506 01:54:24 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:38.506 01:54:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:38.506 01:54:24 -- common/autotest_common.sh@10 -- # set +x 00:19:38.506 01:54:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:38.506 01:54:24 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:38.506 01:54:24 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:38.506 01:54:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:38.506 01:54:24 -- nvmf/common.sh@116 -- # sync 00:19:38.506 01:54:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:38.506 01:54:24 -- nvmf/common.sh@119 -- # set +e 00:19:38.506 01:54:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:38.506 01:54:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:38.506 rmmod nvme_tcp 00:19:38.506 rmmod nvme_fabrics 00:19:38.506 rmmod nvme_keyring 00:19:38.506 01:54:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:38.506 01:54:24 -- nvmf/common.sh@123 -- # set -e 00:19:38.506 01:54:24 -- nvmf/common.sh@124 -- # return 0 00:19:38.506 01:54:24 -- nvmf/common.sh@477 -- # '[' -n 2178887 ']' 00:19:38.506 01:54:24 -- nvmf/common.sh@478 -- # killprocess 2178887 00:19:38.506 01:54:24 -- common/autotest_common.sh@926 -- # '[' -z 2178887 ']' 00:19:38.506 01:54:24 -- common/autotest_common.sh@930 -- # kill -0 2178887 00:19:38.506 01:54:24 -- common/autotest_common.sh@931 -- # uname 00:19:38.506 01:54:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:38.506 01:54:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2178887 00:19:38.506 01:54:24 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:38.506 01:54:24 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:38.506 01:54:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2178887' 00:19:38.506 killing process with pid 2178887 00:19:38.506 01:54:24 -- common/autotest_common.sh@945 -- # kill 2178887 00:19:38.506 01:54:24 -- common/autotest_common.sh@950 -- # wait 2178887 00:19:38.764 01:54:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:38.764 01:54:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:38.764 01:54:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:38.764 01:54:24 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:38.764 01:54:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:38.764 01:54:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:38.764 01:54:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:38.764 01:54:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.301 01:54:26 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:41.301 00:19:41.301 real 0m6.900s 00:19:41.301 user 0m12.723s 00:19:41.301 sys 0m2.103s 00:19:41.302 01:54:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:41.302 01:54:26 -- common/autotest_common.sh@10 -- # set +x 00:19:41.302 ************************************ 00:19:41.302 END TEST nvmf_bdevio 00:19:41.302 ************************************ 00:19:41.302 01:54:26 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:19:41.302 01:54:26 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:41.302 01:54:26 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:19:41.302 01:54:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:41.302 01:54:26 -- common/autotest_common.sh@10 -- # set +x 00:19:41.302 ************************************ 00:19:41.302 START TEST nvmf_bdevio_no_huge 00:19:41.302 ************************************ 00:19:41.302 01:54:26 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:19:41.302 * Looking for test storage... 
00:19:41.302 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:41.302 01:54:26 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:41.302 01:54:26 -- nvmf/common.sh@7 -- # uname -s 00:19:41.302 01:54:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:41.302 01:54:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:41.302 01:54:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:41.302 01:54:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:41.302 01:54:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:41.302 01:54:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:41.302 01:54:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:41.302 01:54:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:41.302 01:54:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:41.302 01:54:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:41.302 01:54:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.302 01:54:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.302 01:54:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:41.302 01:54:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:41.302 01:54:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:41.302 01:54:26 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:41.302 01:54:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:41.302 01:54:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:41.302 01:54:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:41.302 01:54:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.302 01:54:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.302 01:54:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.302 01:54:26 -- paths/export.sh@5 -- # export PATH 00:19:41.302 01:54:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:41.302 01:54:26 -- nvmf/common.sh@46 -- # : 0 00:19:41.302 01:54:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:41.302 01:54:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:41.302 01:54:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:41.302 01:54:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:41.302 01:54:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:41.302 01:54:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:41.302 01:54:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:41.302 01:54:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:41.302 01:54:26 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:41.302 01:54:26 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:41.302 01:54:26 -- target/bdevio.sh@14 -- # nvmftestinit 00:19:41.302 01:54:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:41.302 01:54:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:41.302 01:54:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:41.302 01:54:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:41.302 01:54:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:41.302 01:54:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:41.302 01:54:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:41.302 01:54:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:41.302 01:54:26 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:41.302 01:54:26 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:41.302 01:54:26 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:41.302 01:54:26 -- common/autotest_common.sh@10 -- # set +x 00:19:43.210 01:54:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:43.210 01:54:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:43.210 01:54:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:43.210 01:54:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:43.210 01:54:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:43.210 01:54:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:43.210 01:54:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:43.210 01:54:28 -- nvmf/common.sh@294 -- # net_devs=() 00:19:43.210 01:54:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:43.210 01:54:28 -- nvmf/common.sh@295 
-- # e810=() 00:19:43.210 01:54:28 -- nvmf/common.sh@295 -- # local -ga e810 00:19:43.210 01:54:28 -- nvmf/common.sh@296 -- # x722=() 00:19:43.210 01:54:28 -- nvmf/common.sh@296 -- # local -ga x722 00:19:43.210 01:54:28 -- nvmf/common.sh@297 -- # mlx=() 00:19:43.210 01:54:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:43.210 01:54:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:43.210 01:54:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:43.210 01:54:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:43.210 01:54:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:43.210 01:54:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:43.210 01:54:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:43.210 01:54:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:43.210 01:54:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:43.210 01:54:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:43.210 01:54:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:43.210 01:54:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:43.210 01:54:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:43.210 01:54:28 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:43.210 01:54:28 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:43.210 01:54:28 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:43.210 01:54:28 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:43.210 01:54:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:43.210 01:54:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:43.210 01:54:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:43.210 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:43.210 01:54:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:43.210 01:54:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:43.210 01:54:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.210 01:54:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.210 01:54:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:43.210 01:54:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:43.210 01:54:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:43.210 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:43.210 01:54:28 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:43.210 01:54:28 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:43.210 01:54:28 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:43.210 01:54:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:43.210 01:54:28 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:43.210 01:54:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:43.210 01:54:28 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:43.210 01:54:28 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:43.210 01:54:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:43.210 01:54:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.210 01:54:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:43.210 01:54:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.210 01:54:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:43.210 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:19:43.210 01:54:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.210 01:54:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:43.210 01:54:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:43.210 01:54:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:43.210 01:54:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:43.210 01:54:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:43.210 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:43.210 01:54:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:43.210 01:54:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:43.210 01:54:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:43.210 01:54:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:43.210 01:54:28 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:43.210 01:54:28 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:43.210 01:54:28 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:43.210 01:54:28 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:43.210 01:54:28 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:43.210 01:54:28 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:43.210 01:54:28 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:43.211 01:54:28 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:43.211 01:54:28 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:43.211 01:54:28 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:43.211 01:54:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:43.211 01:54:28 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:43.211 01:54:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:43.211 01:54:28 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:43.211 01:54:28 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:43.211 01:54:28 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:43.211 01:54:28 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:43.211 01:54:28 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:43.211 01:54:28 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:43.211 01:54:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:43.211 01:54:28 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:43.211 01:54:28 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:43.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:19:43.211 00:19:43.211 --- 10.0.0.2 ping statistics --- 00:19:43.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.211 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:19:43.211 01:54:28 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:43.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:43.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:19:43.211 00:19:43.211 --- 10.0.0.1 ping statistics --- 00:19:43.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.211 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:19:43.211 01:54:28 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.211 01:54:28 -- nvmf/common.sh@410 -- # return 0 00:19:43.211 01:54:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:43.211 01:54:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.211 01:54:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:43.211 01:54:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:43.211 01:54:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.211 01:54:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:43.211 01:54:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:43.211 01:54:28 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:19:43.211 01:54:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:43.211 01:54:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:43.211 01:54:28 -- common/autotest_common.sh@10 -- # set +x 00:19:43.211 01:54:28 -- nvmf/common.sh@469 -- # nvmfpid=2181125 00:19:43.211 01:54:28 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:19:43.211 01:54:28 -- nvmf/common.sh@470 -- # waitforlisten 2181125 00:19:43.211 01:54:28 -- common/autotest_common.sh@819 -- # '[' -z 2181125 ']' 00:19:43.211 01:54:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.211 01:54:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:43.211 01:54:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.211 01:54:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:43.211 01:54:28 -- common/autotest_common.sh@10 -- # set +x 00:19:43.211 [2024-04-15 01:54:28.592434] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:19:43.211 [2024-04-15 01:54:28.592526] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:19:43.211 [2024-04-15 01:54:28.663862] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:43.211 [2024-04-15 01:54:28.754328] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:43.211 [2024-04-15 01:54:28.754483] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.211 [2024-04-15 01:54:28.754501] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.211 [2024-04-15 01:54:28.754513] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
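Distilled from the nvmf_tcp_init trace above: one port of the dual-port NIC (cvl_0_0) is moved into a private network namespace and addressed as the target at 10.0.0.2, the peer port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched inside that namespace. A minimal sketch of the same topology, assuming the interface and namespace names shown in the log:

  ip netns add cvl_0_0_ns_spdk                   # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator port stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
  # the target then runs inside the namespace, e.g. (binary path as in the log):
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt --no-huge -s 1024 -m 0x78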
00:19:43.211 [2024-04-15 01:54:28.754642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:43.211 [2024-04-15 01:54:28.754710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:43.211 [2024-04-15 01:54:28.754787] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:43.211 [2024-04-15 01:54:28.754782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:44.147 01:54:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:44.147 01:54:29 -- common/autotest_common.sh@852 -- # return 0 00:19:44.147 01:54:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:44.147 01:54:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:44.147 01:54:29 -- common/autotest_common.sh@10 -- # set +x 00:19:44.147 01:54:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.147 01:54:29 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:44.147 01:54:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.147 01:54:29 -- common/autotest_common.sh@10 -- # set +x 00:19:44.147 [2024-04-15 01:54:29.549512] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.147 01:54:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.147 01:54:29 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:44.147 01:54:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.147 01:54:29 -- common/autotest_common.sh@10 -- # set +x 00:19:44.147 Malloc0 00:19:44.147 01:54:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.147 01:54:29 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:44.147 01:54:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.147 01:54:29 -- common/autotest_common.sh@10 -- # set +x 00:19:44.147 01:54:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.147 01:54:29 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:44.147 01:54:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.147 01:54:29 -- common/autotest_common.sh@10 -- # set +x 00:19:44.147 01:54:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.147 01:54:29 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:44.147 01:54:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:44.147 01:54:29 -- common/autotest_common.sh@10 -- # set +x 00:19:44.147 [2024-04-15 01:54:29.587838] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.147 01:54:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:44.147 01:54:29 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:19:44.147 01:54:29 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:19:44.147 01:54:29 -- nvmf/common.sh@520 -- # config=() 00:19:44.147 01:54:29 -- nvmf/common.sh@520 -- # local subsystem config 00:19:44.147 01:54:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:44.147 01:54:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:44.147 { 00:19:44.147 "params": { 00:19:44.147 "name": "Nvme$subsystem", 00:19:44.147 "trtype": "$TEST_TRANSPORT", 00:19:44.147 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:44.147 "adrfam": "ipv4", 00:19:44.147 
"trsvcid": "$NVMF_PORT", 00:19:44.147 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:44.147 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:44.147 "hdgst": ${hdgst:-false}, 00:19:44.147 "ddgst": ${ddgst:-false} 00:19:44.147 }, 00:19:44.147 "method": "bdev_nvme_attach_controller" 00:19:44.147 } 00:19:44.147 EOF 00:19:44.147 )") 00:19:44.147 01:54:29 -- nvmf/common.sh@542 -- # cat 00:19:44.147 01:54:29 -- nvmf/common.sh@544 -- # jq . 00:19:44.147 01:54:29 -- nvmf/common.sh@545 -- # IFS=, 00:19:44.147 01:54:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:44.147 "params": { 00:19:44.147 "name": "Nvme1", 00:19:44.147 "trtype": "tcp", 00:19:44.147 "traddr": "10.0.0.2", 00:19:44.147 "adrfam": "ipv4", 00:19:44.147 "trsvcid": "4420", 00:19:44.147 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:44.147 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:44.147 "hdgst": false, 00:19:44.147 "ddgst": false 00:19:44.147 }, 00:19:44.147 "method": "bdev_nvme_attach_controller" 00:19:44.147 }' 00:19:44.147 [2024-04-15 01:54:29.629747] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:19:44.147 [2024-04-15 01:54:29.629839] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2181285 ] 00:19:44.147 [2024-04-15 01:54:29.690935] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:44.147 [2024-04-15 01:54:29.776374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.147 [2024-04-15 01:54:29.776425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:44.147 [2024-04-15 01:54:29.776428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:44.716 [2024-04-15 01:54:30.081777] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:44.716 [2024-04-15 01:54:30.081829] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:44.716 I/O targets: 00:19:44.716 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:19:44.716 00:19:44.716 00:19:44.716 CUnit - A unit testing framework for C - Version 2.1-3 00:19:44.716 http://cunit.sourceforge.net/ 00:19:44.716 00:19:44.716 00:19:44.716 Suite: bdevio tests on: Nvme1n1 00:19:44.716 Test: blockdev write read block ...passed 00:19:44.716 Test: blockdev write zeroes read block ...passed 00:19:44.716 Test: blockdev write zeroes read no split ...passed 00:19:44.716 Test: blockdev write zeroes read split ...passed 00:19:44.716 Test: blockdev write zeroes read split partial ...passed 00:19:44.716 Test: blockdev reset ...[2024-04-15 01:54:30.321551] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:44.716 [2024-04-15 01:54:30.321660] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefa720 (9): Bad file descriptor 00:19:44.975 [2024-04-15 01:54:30.382663] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:19:44.975 passed 00:19:44.975 Test: blockdev write read 8 blocks ...passed 00:19:44.975 Test: blockdev write read size > 128k ...passed 00:19:44.975 Test: blockdev write read invalid size ...passed 00:19:44.975 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:44.975 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:44.975 Test: blockdev write read max offset ...passed 00:19:44.975 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:44.975 Test: blockdev writev readv 8 blocks ...passed 00:19:44.975 Test: blockdev writev readv 30 x 1block ...passed 00:19:45.267 Test: blockdev writev readv block ...passed 00:19:45.267 Test: blockdev writev readv size > 128k ...passed 00:19:45.267 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:45.267 Test: blockdev comparev and writev ...[2024-04-15 01:54:30.643362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:45.267 [2024-04-15 01:54:30.643395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:45.267 [2024-04-15 01:54:30.643420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:45.267 [2024-04-15 01:54:30.643437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:45.267 [2024-04-15 01:54:30.643847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:45.267 [2024-04-15 01:54:30.643872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:45.267 [2024-04-15 01:54:30.643894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:45.267 [2024-04-15 01:54:30.643910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:45.267 [2024-04-15 01:54:30.644325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:45.267 [2024-04-15 01:54:30.644349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:45.267 [2024-04-15 01:54:30.644370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:45.267 [2024-04-15 01:54:30.644386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:45.267 [2024-04-15 01:54:30.644777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:45.267 [2024-04-15 01:54:30.644800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:45.267 [2024-04-15 01:54:30.644821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:45.267 [2024-04-15 01:54:30.644838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:45.267 passed 00:19:45.267 Test: blockdev nvme passthru rw ...passed 00:19:45.267 Test: blockdev nvme passthru vendor specific ...[2024-04-15 01:54:30.728522] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:45.267 [2024-04-15 01:54:30.728549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:45.267 [2024-04-15 01:54:30.728791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:45.267 [2024-04-15 01:54:30.728813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:45.267 [2024-04-15 01:54:30.729070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:45.267 [2024-04-15 01:54:30.729093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:45.267 [2024-04-15 01:54:30.729341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:45.267 [2024-04-15 01:54:30.729366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:45.267 passed 00:19:45.267 Test: blockdev nvme admin passthru ...passed 00:19:45.267 Test: blockdev copy ...passed 00:19:45.267 00:19:45.267 Run Summary: Type Total Ran Passed Failed Inactive 00:19:45.267 suites 1 1 n/a 0 0 00:19:45.267 tests 23 23 23 0 0 00:19:45.267 asserts 152 152 152 0 n/a 00:19:45.267 00:19:45.267 Elapsed time = 1.402 seconds 00:19:45.548 01:54:31 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:45.549 01:54:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:45.549 01:54:31 -- common/autotest_common.sh@10 -- # set +x 00:19:45.549 01:54:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:45.549 01:54:31 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:45.549 01:54:31 -- target/bdevio.sh@30 -- # nvmftestfini 00:19:45.549 01:54:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:45.549 01:54:31 -- nvmf/common.sh@116 -- # sync 00:19:45.549 01:54:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:45.549 01:54:31 -- nvmf/common.sh@119 -- # set +e 00:19:45.549 01:54:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:45.549 01:54:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:45.549 rmmod nvme_tcp 00:19:45.549 rmmod nvme_fabrics 00:19:45.549 rmmod nvme_keyring 00:19:45.549 01:54:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:45.549 01:54:31 -- nvmf/common.sh@123 -- # set -e 00:19:45.549 01:54:31 -- nvmf/common.sh@124 -- # return 0 00:19:45.549 01:54:31 -- nvmf/common.sh@477 -- # '[' -n 2181125 ']' 00:19:45.549 01:54:31 -- nvmf/common.sh@478 -- # killprocess 2181125 00:19:45.549 01:54:31 -- common/autotest_common.sh@926 -- # '[' -z 2181125 ']' 00:19:45.549 01:54:31 -- common/autotest_common.sh@930 -- # kill -0 2181125 00:19:45.549 01:54:31 -- common/autotest_common.sh@931 -- # uname 00:19:45.549 01:54:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:45.549 01:54:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2181125 00:19:45.549 01:54:31 -- 
common/autotest_common.sh@932 -- # process_name=reactor_3 00:19:45.549 01:54:31 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:19:45.549 01:54:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2181125' 00:19:45.549 killing process with pid 2181125 00:19:45.549 01:54:31 -- common/autotest_common.sh@945 -- # kill 2181125 00:19:45.549 01:54:31 -- common/autotest_common.sh@950 -- # wait 2181125 00:19:46.118 01:54:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:46.118 01:54:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:46.118 01:54:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:46.118 01:54:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:46.118 01:54:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:46.118 01:54:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.118 01:54:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:46.118 01:54:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.025 01:54:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:19:48.025 00:19:48.025 real 0m7.170s 00:19:48.025 user 0m14.414s 00:19:48.025 sys 0m2.449s 00:19:48.025 01:54:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:48.025 01:54:33 -- common/autotest_common.sh@10 -- # set +x 00:19:48.025 ************************************ 00:19:48.025 END TEST nvmf_bdevio_no_huge 00:19:48.025 ************************************ 00:19:48.025 01:54:33 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:48.025 01:54:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:48.025 01:54:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:48.025 01:54:33 -- common/autotest_common.sh@10 -- # set +x 00:19:48.025 ************************************ 00:19:48.025 START TEST nvmf_tls 00:19:48.025 ************************************ 00:19:48.025 01:54:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:19:48.283 * Looking for test storage... 
00:19:48.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:48.283 01:54:33 -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:48.283 01:54:33 -- nvmf/common.sh@7 -- # uname -s 00:19:48.283 01:54:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:48.283 01:54:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:48.283 01:54:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:48.283 01:54:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:48.283 01:54:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:48.283 01:54:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:48.283 01:54:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:48.283 01:54:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:48.283 01:54:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:48.283 01:54:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:48.284 01:54:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.284 01:54:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.284 01:54:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:48.284 01:54:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:48.284 01:54:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:48.284 01:54:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:48.284 01:54:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:48.284 01:54:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:48.284 01:54:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:48.284 01:54:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.284 01:54:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.284 01:54:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.284 01:54:33 -- paths/export.sh@5 -- # export PATH 00:19:48.284 01:54:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:48.284 01:54:33 -- nvmf/common.sh@46 -- # : 0 00:19:48.284 01:54:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:48.284 01:54:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:48.284 01:54:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:48.284 01:54:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:48.284 01:54:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:48.284 01:54:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:48.284 01:54:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:48.284 01:54:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:48.284 01:54:33 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:48.284 01:54:33 -- target/tls.sh@71 -- # nvmftestinit 00:19:48.284 01:54:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:48.284 01:54:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.284 01:54:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:48.284 01:54:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:48.284 01:54:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:48.284 01:54:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.284 01:54:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:48.284 01:54:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.284 01:54:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:48.284 01:54:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:48.284 01:54:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:48.284 01:54:33 -- common/autotest_common.sh@10 -- # set +x 00:19:50.187 01:54:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:50.187 01:54:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:50.187 01:54:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:50.187 01:54:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:50.187 01:54:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:50.187 01:54:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:50.187 01:54:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:50.187 01:54:35 -- nvmf/common.sh@294 -- # net_devs=() 00:19:50.187 01:54:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:50.187 01:54:35 -- nvmf/common.sh@295 -- # e810=() 00:19:50.187 
01:54:35 -- nvmf/common.sh@295 -- # local -ga e810 00:19:50.187 01:54:35 -- nvmf/common.sh@296 -- # x722=() 00:19:50.187 01:54:35 -- nvmf/common.sh@296 -- # local -ga x722 00:19:50.187 01:54:35 -- nvmf/common.sh@297 -- # mlx=() 00:19:50.187 01:54:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:50.187 01:54:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:50.187 01:54:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:50.187 01:54:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:50.187 01:54:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:50.187 01:54:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:50.187 01:54:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:50.187 01:54:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:50.187 01:54:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:50.187 01:54:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:50.187 01:54:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:50.187 01:54:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:50.187 01:54:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:50.187 01:54:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:19:50.187 01:54:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:19:50.187 01:54:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:19:50.187 01:54:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:19:50.187 01:54:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:50.187 01:54:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:50.187 01:54:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:50.187 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:50.187 01:54:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:50.187 01:54:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:50.187 01:54:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.187 01:54:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.187 01:54:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:50.187 01:54:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:50.187 01:54:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:50.187 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:50.187 01:54:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:19:50.187 01:54:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:19:50.187 01:54:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:50.187 01:54:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:50.187 01:54:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:19:50.187 01:54:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:50.187 01:54:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:19:50.187 01:54:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:19:50.187 01:54:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:50.187 01:54:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.187 01:54:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:50.187 01:54:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.187 01:54:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:50.187 Found net devices under 
0000:0a:00.0: cvl_0_0 00:19:50.187 01:54:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.187 01:54:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:50.187 01:54:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:50.187 01:54:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:50.187 01:54:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:50.188 01:54:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:50.188 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:50.188 01:54:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:50.188 01:54:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:50.188 01:54:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:50.188 01:54:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:50.188 01:54:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:19:50.188 01:54:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:19:50.188 01:54:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:50.188 01:54:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:50.188 01:54:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:50.188 01:54:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:19:50.188 01:54:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:50.188 01:54:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:50.188 01:54:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:19:50.188 01:54:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:50.188 01:54:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:50.188 01:54:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:19:50.188 01:54:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:19:50.188 01:54:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:19:50.188 01:54:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:50.188 01:54:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:50.188 01:54:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:50.188 01:54:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:19:50.188 01:54:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:50.188 01:54:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:50.188 01:54:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:50.188 01:54:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:19:50.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:50.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:19:50.188 00:19:50.188 --- 10.0.0.2 ping statistics --- 00:19:50.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.188 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:19:50.188 01:54:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:50.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:50.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.180 ms 00:19:50.188 00:19:50.188 --- 10.0.0.1 ping statistics --- 00:19:50.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:50.188 rtt min/avg/max/mdev = 0.180/0.180/0.180/0.000 ms 00:19:50.188 01:54:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:50.188 01:54:35 -- nvmf/common.sh@410 -- # return 0 00:19:50.188 01:54:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:50.188 01:54:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:50.188 01:54:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:50.188 01:54:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:50.188 01:54:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:50.188 01:54:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:50.188 01:54:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:50.188 01:54:35 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:19:50.188 01:54:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:50.188 01:54:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:50.188 01:54:35 -- common/autotest_common.sh@10 -- # set +x 00:19:50.188 01:54:35 -- nvmf/common.sh@469 -- # nvmfpid=2183491 00:19:50.188 01:54:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:19:50.188 01:54:35 -- nvmf/common.sh@470 -- # waitforlisten 2183491 00:19:50.188 01:54:35 -- common/autotest_common.sh@819 -- # '[' -z 2183491 ']' 00:19:50.188 01:54:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:50.188 01:54:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:50.188 01:54:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:50.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:50.188 01:54:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:50.188 01:54:35 -- common/autotest_common.sh@10 -- # set +x 00:19:50.188 [2024-04-15 01:54:35.694353] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:19:50.188 [2024-04-15 01:54:35.694444] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:50.188 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.188 [2024-04-15 01:54:35.765186] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.446 [2024-04-15 01:54:35.854475] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:50.446 [2024-04-15 01:54:35.854645] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:50.446 [2024-04-15 01:54:35.854662] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:50.446 [2024-04-15 01:54:35.854674] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
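Because this target is started with --wait-for-rpc, there is a window in which the socket implementation can be switched to ssl and tuned over RPC before any subsystem initializes; the sock_set_default_impl / sock_impl_set_options / framework_start_init calls traced below exercise exactly that window. Reduced to a sketch (RPC method names as traced; binary and script paths assumed relative to the SPDK tree):

  ./build/bin/nvmf_tgt -m 0x2 --wait-for-rpc &
  ./scripts/rpc.py sock_set_default_impl -i ssl
  ./scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
  ./scripts/rpc.py framework_start_init    # only now does target startup complete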
00:19:50.446 [2024-04-15 01:54:35.854702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.446 01:54:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:50.446 01:54:35 -- common/autotest_common.sh@852 -- # return 0 00:19:50.446 01:54:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:50.446 01:54:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:50.446 01:54:35 -- common/autotest_common.sh@10 -- # set +x 00:19:50.446 01:54:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.446 01:54:35 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:19:50.446 01:54:35 -- target/tls.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:19:50.705 true 00:19:50.705 01:54:36 -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:50.705 01:54:36 -- target/tls.sh@82 -- # jq -r .tls_version 00:19:50.963 01:54:36 -- target/tls.sh@82 -- # version=0 00:19:50.963 01:54:36 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:19:50.963 01:54:36 -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:51.222 01:54:36 -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:51.222 01:54:36 -- target/tls.sh@90 -- # jq -r .tls_version 00:19:51.480 01:54:36 -- target/tls.sh@90 -- # version=13 00:19:51.480 01:54:36 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:19:51.480 01:54:36 -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:19:51.480 01:54:37 -- target/tls.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:51.480 01:54:37 -- target/tls.sh@98 -- # jq -r .tls_version 00:19:51.737 01:54:37 -- target/tls.sh@98 -- # version=7 00:19:51.737 01:54:37 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:19:51.737 01:54:37 -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:51.737 01:54:37 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:19:51.995 01:54:37 -- target/tls.sh@105 -- # ktls=false 00:19:51.995 01:54:37 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:19:51.995 01:54:37 -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:19:52.254 01:54:37 -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:52.254 01:54:37 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:19:52.514 01:54:38 -- target/tls.sh@113 -- # ktls=true 00:19:52.514 01:54:38 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:19:52.514 01:54:38 -- target/tls.sh@120 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:19:52.772 01:54:38 -- target/tls.sh@121 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:19:52.772 01:54:38 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:19:53.030 01:54:38 -- target/tls.sh@121 -- # ktls=false 00:19:53.030 01:54:38 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:19:53.030 01:54:38 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 
00:19:53.030 01:54:38 -- target/tls.sh@49 -- # local key hash crc 00:19:53.030 01:54:38 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:19:53.030 01:54:38 -- target/tls.sh@51 -- # hash=01 00:19:53.030 01:54:38 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:19:53.030 01:54:38 -- target/tls.sh@52 -- # gzip -1 -c 00:19:53.031 01:54:38 -- target/tls.sh@52 -- # tail -c8 00:19:53.031 01:54:38 -- target/tls.sh@52 -- # head -c 4 00:19:53.031 01:54:38 -- target/tls.sh@52 -- # crc='p$H�' 00:19:53.031 01:54:38 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:53.031 01:54:38 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:19:53.031 01:54:38 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:53.031 01:54:38 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:53.031 01:54:38 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:19:53.031 01:54:38 -- target/tls.sh@49 -- # local key hash crc 00:19:53.031 01:54:38 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:19:53.031 01:54:38 -- target/tls.sh@51 -- # hash=01 00:19:53.031 01:54:38 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:19:53.031 01:54:38 -- target/tls.sh@52 -- # gzip -1 -c 00:19:53.031 01:54:38 -- target/tls.sh@52 -- # tail -c8 00:19:53.031 01:54:38 -- target/tls.sh@52 -- # head -c 4 00:19:53.031 01:54:38 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:19:53.031 01:54:38 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:19:53.031 01:54:38 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:19:53.031 01:54:38 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:53.031 01:54:38 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:53.031 01:54:38 -- target/tls.sh@130 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:53.031 01:54:38 -- target/tls.sh@131 -- # key_2_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:53.031 01:54:38 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:19:53.031 01:54:38 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:19:53.031 01:54:38 -- target/tls.sh@136 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:53.031 01:54:38 -- target/tls.sh@137 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:19:53.031 01:54:38 -- target/tls.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:19:53.290 01:54:38 -- target/tls.sh@140 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:19:53.549 01:54:39 -- target/tls.sh@142 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:53.549 01:54:39 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:53.549 01:54:39 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:53.809 [2024-04-15 01:54:39.382276] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
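The format_interchange_psk trace above shows how the test derives the NVMe TLS interchange key: the configured hex string is checksummed by abusing gzip -1, whose 8-byte trailer is CRC-32 followed by length (so tail -c8 | head -c4 extracts the four CRC bytes), and key plus CRC are then base64-wrapped as NVMeTLSkey-1:<hash>:<base64>:, with hash id 01 as shown. A rough re-derivation under those assumptions (holding raw CRC bytes in a shell variable is fragile, which the traced helper sidesteps with /dev/fd; the finished keys land in key1.txt/key2.txt with mode 0600):

  key=00112233445566778899aabbccddeeff
  crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)   # CRC-32 of the key bytes
  echo "NVMeTLSkey-1:01:$(echo -n "$key$crc" | base64):"
  # expected: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: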
00:19:53.809 01:54:39 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:54.069 01:54:39 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:54.327 [2024-04-15 01:54:39.859558] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:54.327 [2024-04-15 01:54:39.859777] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.327 01:54:39 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:54.585 malloc0 00:19:54.585 01:54:40 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:54.844 01:54:40 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:55.105 01:54:40 -- target/tls.sh@146 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:19:55.105 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.318 Initializing NVMe Controllers 00:20:07.318 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:07.318 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:07.318 Initialization complete. Launching workers. 
00:20:07.318 ======================================================== 00:20:07.318 Latency(us) 00:20:07.318 Device Information : IOPS MiB/s Average min max 00:20:07.318 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7874.27 30.76 8130.40 1269.07 8855.72 00:20:07.318 ======================================================== 00:20:07.318 Total : 7874.27 30.76 8130.40 1269.07 8855.72 00:20:07.318 00:20:07.318 01:54:50 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:07.318 01:54:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:07.318 01:54:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:07.318 01:54:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:07.318 01:54:50 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:07.318 01:54:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:07.318 01:54:50 -- target/tls.sh@28 -- # bdevperf_pid=2185326 00:20:07.318 01:54:50 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:07.318 01:54:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:07.318 01:54:50 -- target/tls.sh@31 -- # waitforlisten 2185326 /var/tmp/bdevperf.sock 00:20:07.318 01:54:50 -- common/autotest_common.sh@819 -- # '[' -z 2185326 ']' 00:20:07.318 01:54:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.318 01:54:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:07.319 01:54:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.319 01:54:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:07.319 01:54:50 -- common/autotest_common.sh@10 -- # set +x 00:20:07.319 [2024-04-15 01:54:50.786085] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:20:07.319 [2024-04-15 01:54:50.786167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2185326 ] 00:20:07.319 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.319 [2024-04-15 01:54:50.843438] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:07.319 [2024-04-15 01:54:50.924017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.319 01:54:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:07.319 01:54:51 -- common/autotest_common.sh@852 -- # return 0 00:20:07.319 01:54:51 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:07.319 [2024-04-15 01:54:52.022570] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:07.319 TLSTESTn1 00:20:07.319 01:54:52 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:07.319 Running I/O for 10 seconds... 00:20:17.336 00:20:17.336 Latency(us) 00:20:17.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.336 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:17.336 Verification LBA range: start 0x0 length 0x2000 00:20:17.336 TLSTESTn1 : 10.05 1115.55 4.36 0.00 0.00 114480.96 8349.77 137479.96 00:20:17.336 =================================================================================================================== 00:20:17.336 Total : 1115.55 4.36 0.00 0.00 114480.96 8349.77 137479.96 00:20:17.336 0 00:20:17.336 01:55:02 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:17.336 01:55:02 -- target/tls.sh@45 -- # killprocess 2185326 00:20:17.336 01:55:02 -- common/autotest_common.sh@926 -- # '[' -z 2185326 ']' 00:20:17.336 01:55:02 -- common/autotest_common.sh@930 -- # kill -0 2185326 00:20:17.336 01:55:02 -- common/autotest_common.sh@931 -- # uname 00:20:17.336 01:55:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:17.336 01:55:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2185326 00:20:17.336 01:55:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:17.336 01:55:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:17.336 01:55:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2185326' 00:20:17.336 killing process with pid 2185326 00:20:17.336 01:55:02 -- common/autotest_common.sh@945 -- # kill 2185326 00:20:17.336 Received shutdown signal, test time was about 10.000000 seconds 00:20:17.336 00:20:17.336 Latency(us) 00:20:17.336 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:17.336 =================================================================================================================== 00:20:17.336 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:17.336 01:55:02 -- common/autotest_common.sh@950 -- # wait 2185326 00:20:17.336 01:55:02 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:17.336 01:55:02 -- common/autotest_common.sh@640 -- # local es=0 00:20:17.336 01:55:02 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:17.336 01:55:02 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:17.336 01:55:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:17.336 01:55:02 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:17.336 01:55:02 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:17.336 01:55:02 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:17.336 01:55:02 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:17.336 01:55:02 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:17.336 01:55:02 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:17.336 01:55:02 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt' 00:20:17.336 01:55:02 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:17.336 01:55:02 -- target/tls.sh@28 -- # bdevperf_pid=2186702 00:20:17.336 01:55:02 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:17.336 01:55:02 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:17.336 01:55:02 -- target/tls.sh@31 -- # waitforlisten 2186702 /var/tmp/bdevperf.sock 00:20:17.336 01:55:02 -- common/autotest_common.sh@819 -- # '[' -z 2186702 ']' 00:20:17.336 01:55:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:17.336 01:55:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:17.336 01:55:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:17.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:17.336 01:55:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:17.336 01:55:02 -- common/autotest_common.sh@10 -- # set +x 00:20:17.336 [2024-04-15 01:55:02.615546] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:20:17.336 [2024-04-15 01:55:02.615631] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186702 ] 00:20:17.336 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.336 [2024-04-15 01:55:02.678189] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.336 [2024-04-15 01:55:02.761038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:18.277 01:55:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:18.277 01:55:03 -- common/autotest_common.sh@852 -- # return 0 00:20:18.277 01:55:03 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt 00:20:18.277 [2024-04-15 01:55:03.825184] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:18.277 [2024-04-15 01:55:03.835729] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:18.277 [2024-04-15 01:55:03.836367] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c27f0 (107): Transport endpoint is not connected 00:20:18.277 [2024-04-15 01:55:03.837357] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10c27f0 (9): Bad file descriptor 00:20:18.277 [2024-04-15 01:55:03.838357] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:18.277 [2024-04-15 01:55:03.838378] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:18.277 [2024-04-15 01:55:03.838394] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:20:18.277 request: 00:20:18.277 { 00:20:18.277 "name": "TLSTEST", 00:20:18.277 "trtype": "tcp", 00:20:18.277 "traddr": "10.0.0.2", 00:20:18.277 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:18.277 "adrfam": "ipv4", 00:20:18.277 "trsvcid": "4420", 00:20:18.277 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:18.277 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt", 00:20:18.277 "method": "bdev_nvme_attach_controller", 00:20:18.277 "req_id": 1 00:20:18.277 } 00:20:18.277 Got JSON-RPC error response 00:20:18.277 response: 00:20:18.277 { 00:20:18.277 "code": -32602, 00:20:18.277 "message": "Invalid parameters" 00:20:18.277 } 00:20:18.277 01:55:03 -- target/tls.sh@36 -- # killprocess 2186702 00:20:18.277 01:55:03 -- common/autotest_common.sh@926 -- # '[' -z 2186702 ']' 00:20:18.277 01:55:03 -- common/autotest_common.sh@930 -- # kill -0 2186702 00:20:18.277 01:55:03 -- common/autotest_common.sh@931 -- # uname 00:20:18.277 01:55:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:18.277 01:55:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2186702 00:20:18.277 01:55:03 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:18.277 01:55:03 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:18.277 01:55:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2186702' 00:20:18.277 killing process with pid 2186702 00:20:18.277 01:55:03 -- common/autotest_common.sh@945 -- # kill 2186702 00:20:18.277 Received shutdown signal, test time was about 10.000000 seconds 00:20:18.277 00:20:18.277 Latency(us) 00:20:18.277 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:18.277 =================================================================================================================== 00:20:18.277 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:18.277 01:55:03 -- common/autotest_common.sh@950 -- # wait 2186702 00:20:18.536 01:55:04 -- target/tls.sh@37 -- # return 1 00:20:18.536 01:55:04 -- common/autotest_common.sh@643 -- # es=1 00:20:18.536 01:55:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:18.536 01:55:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:18.536 01:55:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:18.536 01:55:04 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:18.536 01:55:04 -- common/autotest_common.sh@640 -- # local es=0 00:20:18.536 01:55:04 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:18.536 01:55:04 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:18.536 01:55:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:18.536 01:55:04 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:18.536 01:55:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:18.536 01:55:04 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:18.536 01:55:04 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:18.536 01:55:04 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:18.537 01:55:04 -- target/tls.sh@23 -- 
# hostnqn=nqn.2016-06.io.spdk:host2 00:20:18.537 01:55:04 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:18.537 01:55:04 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:18.537 01:55:04 -- target/tls.sh@28 -- # bdevperf_pid=2186969 00:20:18.537 01:55:04 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:18.537 01:55:04 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:18.537 01:55:04 -- target/tls.sh@31 -- # waitforlisten 2186969 /var/tmp/bdevperf.sock 00:20:18.537 01:55:04 -- common/autotest_common.sh@819 -- # '[' -z 2186969 ']' 00:20:18.537 01:55:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:18.537 01:55:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:18.537 01:55:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:18.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:18.537 01:55:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:18.537 01:55:04 -- common/autotest_common.sh@10 -- # set +x 00:20:18.537 [2024-04-15 01:55:04.147292] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:18.537 [2024-04-15 01:55:04.147397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2186969 ] 00:20:18.537 EAL: No free 2048 kB hugepages reported on node 1 00:20:18.797 [2024-04-15 01:55:04.211656] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.797 [2024-04-15 01:55:04.300137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:19.735 01:55:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:19.735 01:55:05 -- common/autotest_common.sh@852 -- # return 0 00:20:19.735 01:55:05 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:19.735 [2024-04-15 01:55:05.345005] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:19.735 [2024-04-15 01:55:05.350636] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:19.735 [2024-04-15 01:55:05.350677] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:19.735 [2024-04-15 01:55:05.350718] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:19.735 [2024-04-15 01:55:05.351182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12817f0 (107): Transport endpoint is not connected 00:20:19.735 [2024-04-15 01:55:05.352171] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: 
Failed to flush tqpair=0x12817f0 (9): Bad file descriptor 00:20:19.735 [2024-04-15 01:55:05.353169] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:19.735 [2024-04-15 01:55:05.353191] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:19.735 [2024-04-15 01:55:05.353208] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:19.735 request: 00:20:19.735 { 00:20:19.735 "name": "TLSTEST", 00:20:19.735 "trtype": "tcp", 00:20:19.735 "traddr": "10.0.0.2", 00:20:19.735 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:19.735 "adrfam": "ipv4", 00:20:19.735 "trsvcid": "4420", 00:20:19.735 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:19.735 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:19.735 "method": "bdev_nvme_attach_controller", 00:20:19.735 "req_id": 1 00:20:19.735 } 00:20:19.735 Got JSON-RPC error response 00:20:19.735 response: 00:20:19.735 { 00:20:19.735 "code": -32602, 00:20:19.735 "message": "Invalid parameters" 00:20:19.735 } 00:20:19.735 01:55:05 -- target/tls.sh@36 -- # killprocess 2186969 00:20:19.735 01:55:05 -- common/autotest_common.sh@926 -- # '[' -z 2186969 ']' 00:20:19.735 01:55:05 -- common/autotest_common.sh@930 -- # kill -0 2186969 00:20:19.735 01:55:05 -- common/autotest_common.sh@931 -- # uname 00:20:19.735 01:55:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:19.735 01:55:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2186969 00:20:19.993 01:55:05 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:19.993 01:55:05 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:19.993 01:55:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2186969' 00:20:19.993 killing process with pid 2186969 00:20:19.993 01:55:05 -- common/autotest_common.sh@945 -- # kill 2186969 00:20:19.993 Received shutdown signal, test time was about 10.000000 seconds 00:20:19.993 00:20:19.993 Latency(us) 00:20:19.993 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:19.993 =================================================================================================================== 00:20:19.993 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:19.993 01:55:05 -- common/autotest_common.sh@950 -- # wait 2186969 00:20:19.993 01:55:05 -- target/tls.sh@37 -- # return 1 00:20:19.993 01:55:05 -- common/autotest_common.sh@643 -- # es=1 00:20:19.993 01:55:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:19.993 01:55:05 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:19.993 01:55:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:19.993 01:55:05 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:19.993 01:55:05 -- common/autotest_common.sh@640 -- # local es=0 00:20:19.993 01:55:05 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:19.993 01:55:05 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:19.993 01:55:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:19.993 01:55:05 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:19.993 01:55:05 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:19.993 01:55:05 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:19.993 01:55:05 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:19.993 01:55:05 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:19.993 01:55:05 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:19.993 01:55:05 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt' 00:20:19.994 01:55:05 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:19.994 01:55:05 -- target/tls.sh@28 -- # bdevperf_pid=2187126 00:20:19.994 01:55:05 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:19.994 01:55:05 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:19.994 01:55:05 -- target/tls.sh@31 -- # waitforlisten 2187126 /var/tmp/bdevperf.sock 00:20:19.994 01:55:05 -- common/autotest_common.sh@819 -- # '[' -z 2187126 ']' 00:20:19.994 01:55:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:19.994 01:55:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:19.994 01:55:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:19.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:19.994 01:55:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:19.994 01:55:05 -- common/autotest_common.sh@10 -- # set +x 00:20:19.994 [2024-04-15 01:55:05.634274] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:20:19.994 [2024-04-15 01:55:05.634357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187126 ] 00:20:20.254 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.254 [2024-04-15 01:55:05.694712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.254 [2024-04-15 01:55:05.773548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:21.192 01:55:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:21.192 01:55:06 -- common/autotest_common.sh@852 -- # return 0 00:20:21.192 01:55:06 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt 00:20:21.192 [2024-04-15 01:55:06.781861] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.192 [2024-04-15 01:55:06.793191] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:21.192 [2024-04-15 01:55:06.793223] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:21.192 [2024-04-15 01:55:06.793261] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:21.192 [2024-04-15 01:55:06.793796] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c817f0 (107): Transport endpoint is not connected 00:20:21.192 [2024-04-15 01:55:06.794784] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c817f0 (9): Bad file descriptor 00:20:21.192 [2024-04-15 01:55:06.795782] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:21.192 [2024-04-15 01:55:06.795801] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:21.192 [2024-04-15 01:55:06.795831] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
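Worth noting in the errors above: the target does not look the key up by file contents but by an identity string built from both NQNs, so key1.txt misses here simply because no PSK was ever registered for the host1/cnode2 pairing. As logged by tcp_sock_get_key, the identity has the shape "NVMe0R01 <hostnqn> <subnqn>"; a quick illustration of the exact string the target searched for:

    # the PSK identity from the errors above
    printf 'NVMe0R01 %s %s\n' nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2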
00:20:21.192 request: 00:20:21.192 { 00:20:21.192 "name": "TLSTEST", 00:20:21.192 "trtype": "tcp", 00:20:21.192 "traddr": "10.0.0.2", 00:20:21.192 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:21.192 "adrfam": "ipv4", 00:20:21.192 "trsvcid": "4420", 00:20:21.192 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:21.192 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt", 00:20:21.192 "method": "bdev_nvme_attach_controller", 00:20:21.192 "req_id": 1 00:20:21.192 } 00:20:21.192 Got JSON-RPC error response 00:20:21.192 response: 00:20:21.192 { 00:20:21.192 "code": -32602, 00:20:21.193 "message": "Invalid parameters" 00:20:21.193 } 00:20:21.193 01:55:06 -- target/tls.sh@36 -- # killprocess 2187126 00:20:21.193 01:55:06 -- common/autotest_common.sh@926 -- # '[' -z 2187126 ']' 00:20:21.193 01:55:06 -- common/autotest_common.sh@930 -- # kill -0 2187126 00:20:21.193 01:55:06 -- common/autotest_common.sh@931 -- # uname 00:20:21.193 01:55:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:21.193 01:55:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2187126 00:20:21.193 01:55:06 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:21.193 01:55:06 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:21.193 01:55:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2187126' 00:20:21.193 killing process with pid 2187126 00:20:21.193 01:55:06 -- common/autotest_common.sh@945 -- # kill 2187126 00:20:21.193 Received shutdown signal, test time was about 10.000000 seconds 00:20:21.193 00:20:21.193 Latency(us) 00:20:21.193 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:21.193 =================================================================================================================== 00:20:21.193 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:21.193 01:55:06 -- common/autotest_common.sh@950 -- # wait 2187126 00:20:21.452 01:55:07 -- target/tls.sh@37 -- # return 1 00:20:21.452 01:55:07 -- common/autotest_common.sh@643 -- # es=1 00:20:21.452 01:55:07 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:21.452 01:55:07 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:21.452 01:55:07 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:21.452 01:55:07 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:21.452 01:55:07 -- common/autotest_common.sh@640 -- # local es=0 00:20:21.452 01:55:07 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:21.452 01:55:07 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:21.452 01:55:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:21.452 01:55:07 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:21.452 01:55:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:21.452 01:55:07 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:21.452 01:55:07 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:21.452 01:55:07 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:21.452 01:55:07 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:21.452 01:55:07 -- target/tls.sh@23 -- # psk= 00:20:21.452 01:55:07 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:21.452 01:55:07 -- target/tls.sh@28 
-- # bdevperf_pid=2187271 00:20:21.452 01:55:07 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:21.452 01:55:07 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:21.452 01:55:07 -- target/tls.sh@31 -- # waitforlisten 2187271 /var/tmp/bdevperf.sock 00:20:21.452 01:55:07 -- common/autotest_common.sh@819 -- # '[' -z 2187271 ']' 00:20:21.452 01:55:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:21.452 01:55:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:21.452 01:55:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:21.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:21.452 01:55:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:21.452 01:55:07 -- common/autotest_common.sh@10 -- # set +x 00:20:21.452 [2024-04-15 01:55:07.063873] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:21.452 [2024-04-15 01:55:07.063951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187271 ] 00:20:21.452 EAL: No free 2048 kB hugepages reported on node 1 00:20:21.712 [2024-04-15 01:55:07.124019] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.712 [2024-04-15 01:55:07.211442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.649 01:55:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:22.649 01:55:08 -- common/autotest_common.sh@852 -- # return 0 00:20:22.649 01:55:08 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:22.649 [2024-04-15 01:55:08.257181] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:22.649 [2024-04-15 01:55:08.259120] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edbec0 (9): Bad file descriptor 00:20:22.649 [2024-04-15 01:55:08.260114] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:22.649 [2024-04-15 01:55:08.260136] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:22.649 [2024-04-15 01:55:08.260154] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
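The case just traced (psk='') drops the key entirely: bdev_nvme_attach_controller is issued with no --psk against a listener that was brought up TLS-only (the setup_nvmf_tgt calls later in this log pass -k to nvmf_subsystem_add_listener), so the plain-text connect is rejected with the same errno-107 teardown. A sketch, reusing the $SPDK shorthand from the earlier annotation:

    # no --psk against a TLS-required listener; also expected to fail
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 && exit 1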
00:20:22.649 request: 00:20:22.649 { 00:20:22.649 "name": "TLSTEST", 00:20:22.649 "trtype": "tcp", 00:20:22.649 "traddr": "10.0.0.2", 00:20:22.649 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:22.649 "adrfam": "ipv4", 00:20:22.649 "trsvcid": "4420", 00:20:22.649 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:22.649 "method": "bdev_nvme_attach_controller", 00:20:22.649 "req_id": 1 00:20:22.649 } 00:20:22.649 Got JSON-RPC error response 00:20:22.649 response: 00:20:22.649 { 00:20:22.649 "code": -32602, 00:20:22.649 "message": "Invalid parameters" 00:20:22.649 } 00:20:22.649 01:55:08 -- target/tls.sh@36 -- # killprocess 2187271 00:20:22.649 01:55:08 -- common/autotest_common.sh@926 -- # '[' -z 2187271 ']' 00:20:22.649 01:55:08 -- common/autotest_common.sh@930 -- # kill -0 2187271 00:20:22.649 01:55:08 -- common/autotest_common.sh@931 -- # uname 00:20:22.649 01:55:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:22.649 01:55:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2187271 00:20:22.908 01:55:08 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:22.908 01:55:08 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:22.908 01:55:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2187271' 00:20:22.908 killing process with pid 2187271 00:20:22.908 01:55:08 -- common/autotest_common.sh@945 -- # kill 2187271 00:20:22.908 Received shutdown signal, test time was about 10.000000 seconds 00:20:22.908 00:20:22.908 Latency(us) 00:20:22.908 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:22.908 =================================================================================================================== 00:20:22.908 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:22.908 01:55:08 -- common/autotest_common.sh@950 -- # wait 2187271 00:20:22.908 01:55:08 -- target/tls.sh@37 -- # return 1 00:20:22.908 01:55:08 -- common/autotest_common.sh@643 -- # es=1 00:20:22.908 01:55:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:22.908 01:55:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:22.908 01:55:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:22.908 01:55:08 -- target/tls.sh@167 -- # killprocess 2183491 00:20:22.908 01:55:08 -- common/autotest_common.sh@926 -- # '[' -z 2183491 ']' 00:20:22.909 01:55:08 -- common/autotest_common.sh@930 -- # kill -0 2183491 00:20:22.909 01:55:08 -- common/autotest_common.sh@931 -- # uname 00:20:22.909 01:55:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:22.909 01:55:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2183491 00:20:22.909 01:55:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:22.909 01:55:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:22.909 01:55:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2183491' 00:20:22.909 killing process with pid 2183491 00:20:22.909 01:55:08 -- common/autotest_common.sh@945 -- # kill 2183491 00:20:22.909 01:55:08 -- common/autotest_common.sh@950 -- # wait 2183491 00:20:23.168 01:55:08 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:20:23.168 01:55:08 -- target/tls.sh@49 -- # local key hash crc 00:20:23.168 01:55:08 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:23.168 01:55:08 -- target/tls.sh@51 -- # hash=02 00:20:23.168 01:55:08 -- target/tls.sh@52 -- # echo 
-n 00112233445566778899aabbccddeeff0011223344556677 00:20:23.168 01:55:08 -- target/tls.sh@52 -- # gzip -1 -c 00:20:23.168 01:55:08 -- target/tls.sh@52 -- # tail -c8 00:20:23.168 01:55:08 -- target/tls.sh@52 -- # head -c 4 00:20:23.168 01:55:08 -- target/tls.sh@52 -- # crc='�e�'\''' 00:20:23.168 01:55:08 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:20:23.168 01:55:08 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:20:23.168 01:55:08 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:23.168 01:55:08 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:23.168 01:55:08 -- target/tls.sh@169 -- # key_long_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:23.168 01:55:08 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:23.168 01:55:08 -- target/tls.sh@171 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:23.168 01:55:08 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:20:23.168 01:55:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:23.168 01:55:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:23.168 01:55:08 -- common/autotest_common.sh@10 -- # set +x 00:20:23.428 01:55:08 -- nvmf/common.sh@469 -- # nvmfpid=2187565 00:20:23.428 01:55:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:23.428 01:55:08 -- nvmf/common.sh@470 -- # waitforlisten 2187565 00:20:23.428 01:55:08 -- common/autotest_common.sh@819 -- # '[' -z 2187565 ']' 00:20:23.428 01:55:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.428 01:55:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:23.428 01:55:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:23.428 01:55:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:23.428 01:55:08 -- common/autotest_common.sh@10 -- # set +x 00:20:23.428 [2024-04-15 01:55:08.860946] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:23.428 [2024-04-15 01:55:08.861033] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:23.428 EAL: No free 2048 kB hugepages reported on node 1 00:20:23.428 [2024-04-15 01:55:08.926171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:23.428 [2024-04-15 01:55:09.012552] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:23.428 [2024-04-15 01:55:09.012719] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:23.428 [2024-04-15 01:55:09.012737] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:23.428 [2024-04-15 01:55:09.012749] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
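The key_long.txt generation a few entries above deserves a closer look, since it is the NVMe TLS PSK interchange format in miniature: format_interchange_psk takes the configured 48-byte key (here the literal ASCII string of hex digits), appends its CRC32, base64-encodes the pair, and wraps it as "NVMeTLSkey-1:02:<base64>:", with 02 being the script's hash/key-size selector. The CRC is pulled from the gzip trailer, whose final eight bytes are CRC32 (little-endian) followed by the input length, hence the tail -c8 | head -c4 dance. A standalone, binary-safe rerun of the same pipeline:

    key=00112233445566778899aabbccddeeff0011223344556677
    # gzip trailer = CRC32 (4 bytes, LE) + ISIZE (4 bytes); keep only the CRC
    b64=$( { echo -n "$key"
             echo -n "$key" | gzip -1 -c | tail -c8 | head -c4
           } | base64 )
    echo "NVMeTLSkey-1:02:${b64}:"
    # prints the same value recorded in the trace:
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: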
00:20:23.428 [2024-04-15 01:55:09.012782] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:24.367 01:55:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:24.367 01:55:09 -- common/autotest_common.sh@852 -- # return 0 00:20:24.367 01:55:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:24.367 01:55:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:24.367 01:55:09 -- common/autotest_common.sh@10 -- # set +x 00:20:24.367 01:55:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.367 01:55:09 -- target/tls.sh@174 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:24.367 01:55:09 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:24.367 01:55:09 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:24.626 [2024-04-15 01:55:10.060614] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:24.626 01:55:10 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:24.884 01:55:10 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:25.144 [2024-04-15 01:55:10.553943] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:25.144 [2024-04-15 01:55:10.554197] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:25.144 01:55:10 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:25.403 malloc0 00:20:25.403 01:55:10 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:25.662 01:55:11 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:25.922 01:55:11 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:25.922 01:55:11 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:25.922 01:55:11 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:25.922 01:55:11 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:25.922 01:55:11 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:25.922 01:55:11 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:25.922 01:55:11 -- target/tls.sh@28 -- # bdevperf_pid=2187865 00:20:25.922 01:55:11 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:25.922 01:55:11 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:25.922 01:55:11 -- target/tls.sh@31 -- # waitforlisten 2187865 /var/tmp/bdevperf.sock 00:20:25.922 01:55:11 -- common/autotest_common.sh@819 -- # '[' -z 2187865 
']' 00:20:25.922 01:55:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:25.922 01:55:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:25.922 01:55:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:25.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:25.922 01:55:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:25.922 01:55:11 -- common/autotest_common.sh@10 -- # set +x 00:20:25.922 [2024-04-15 01:55:11.373358] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:25.922 [2024-04-15 01:55:11.373437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2187865 ] 00:20:25.922 EAL: No free 2048 kB hugepages reported on node 1 00:20:25.922 [2024-04-15 01:55:11.430731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:25.922 [2024-04-15 01:55:11.516259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:26.857 01:55:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:26.857 01:55:12 -- common/autotest_common.sh@852 -- # return 0 00:20:26.857 01:55:12 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:27.115 [2024-04-15 01:55:12.558065] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:27.115 TLSTESTn1 00:20:27.115 01:55:12 -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:27.374 Running I/O for 10 seconds... 
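This is the first positive run: with key_long.txt registered on the target via nvmf_subsystem_add_host, the same attach now succeeds, the TLSTESTn1 bdev appears, and bdevperf.py fires the queued verify workload across the TLS connection. The three-step host-side pattern used for the rest of this log, condensed with the $SPDK shorthand:

    # 1. start bdevperf idle (-z) on its private RPC socket
    $SPDK/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    # 2. create the NVMe bdev over TLS with the registered PSK
    $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk $SPDK/test/nvmf/target/key_long.txt
    # 3. kick off the configured workload
    $SPDK/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests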
00:20:37.386 00:20:37.386 Latency(us) 00:20:37.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.386 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:37.386 Verification LBA range: start 0x0 length 0x2000 00:20:37.386 TLSTESTn1 : 10.15 986.00 3.85 0.00 0.00 128966.78 4878.79 136703.24 00:20:37.386 =================================================================================================================== 00:20:37.386 Total : 986.00 3.85 0.00 0.00 128966.78 4878.79 136703.24 00:20:37.386 0 00:20:37.386 01:55:22 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:37.386 01:55:22 -- target/tls.sh@45 -- # killprocess 2187865 00:20:37.386 01:55:22 -- common/autotest_common.sh@926 -- # '[' -z 2187865 ']' 00:20:37.386 01:55:22 -- common/autotest_common.sh@930 -- # kill -0 2187865 00:20:37.386 01:55:22 -- common/autotest_common.sh@931 -- # uname 00:20:37.386 01:55:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:37.386 01:55:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2187865 00:20:37.386 01:55:22 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:37.386 01:55:22 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:37.386 01:55:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2187865' 00:20:37.386 killing process with pid 2187865 00:20:37.386 01:55:22 -- common/autotest_common.sh@945 -- # kill 2187865 00:20:37.386 Received shutdown signal, test time was about 10.000000 seconds 00:20:37.386 00:20:37.386 Latency(us) 00:20:37.386 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:37.386 =================================================================================================================== 00:20:37.386 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:37.386 01:55:22 -- common/autotest_common.sh@950 -- # wait 2187865 00:20:37.645 01:55:23 -- target/tls.sh@179 -- # chmod 0666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:37.645 01:55:23 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:37.645 01:55:23 -- common/autotest_common.sh@640 -- # local es=0 00:20:37.645 01:55:23 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:37.645 01:55:23 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:20:37.645 01:55:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:37.645 01:55:23 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:20:37.645 01:55:23 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:37.645 01:55:23 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:37.645 01:55:23 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:37.645 01:55:23 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:37.645 01:55:23 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:37.645 01:55:23 -- target/tls.sh@23 -- # psk='--psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt' 00:20:37.645 01:55:23 -- 
target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.645 01:55:23 -- target/tls.sh@28 -- # bdevperf_pid=2189334 00:20:37.645 01:55:23 -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:37.645 01:55:23 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.645 01:55:23 -- target/tls.sh@31 -- # waitforlisten 2189334 /var/tmp/bdevperf.sock 00:20:37.645 01:55:23 -- common/autotest_common.sh@819 -- # '[' -z 2189334 ']' 00:20:37.645 01:55:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.645 01:55:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:37.645 01:55:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.645 01:55:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:37.645 01:55:23 -- common/autotest_common.sh@10 -- # set +x 00:20:37.646 [2024-04-15 01:55:23.236724] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:37.646 [2024-04-15 01:55:23.236810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2189334 ] 00:20:37.646 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.905 [2024-04-15 01:55:23.294733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.905 [2024-04-15 01:55:23.375363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:38.843 01:55:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:38.843 01:55:24 -- common/autotest_common.sh@852 -- # return 0 00:20:38.843 01:55:24 -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:38.843 [2024-04-15 01:55:24.420829] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:38.843 [2024-04-15 01:55:24.420903] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:38.843 request: 00:20:38.843 { 00:20:38.843 "name": "TLSTEST", 00:20:38.843 "trtype": "tcp", 00:20:38.843 "traddr": "10.0.0.2", 00:20:38.843 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:38.843 "adrfam": "ipv4", 00:20:38.843 "trsvcid": "4420", 00:20:38.843 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:38.844 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:38.844 "method": "bdev_nvme_attach_controller", 00:20:38.844 "req_id": 1 00:20:38.844 } 00:20:38.844 Got JSON-RPC error response 00:20:38.844 response: 00:20:38.844 { 00:20:38.844 "code": -22, 00:20:38.844 "message": "Could not retrieve PSK from file: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:38.844 } 00:20:38.844 01:55:24 -- target/tls.sh@36 -- # killprocess 2189334 00:20:38.844 01:55:24 -- common/autotest_common.sh@926 -- # '[' -z 2189334 ']' 00:20:38.844 01:55:24 -- 
common/autotest_common.sh@930 -- # kill -0 2189334 00:20:38.844 01:55:24 -- common/autotest_common.sh@931 -- # uname 00:20:38.844 01:55:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:38.844 01:55:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2189334 00:20:38.844 01:55:24 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:38.844 01:55:24 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:38.844 01:55:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2189334' 00:20:38.844 killing process with pid 2189334 00:20:38.844 01:55:24 -- common/autotest_common.sh@945 -- # kill 2189334 00:20:38.844 Received shutdown signal, test time was about 10.000000 seconds 00:20:38.844 00:20:38.844 Latency(us) 00:20:38.844 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:38.844 =================================================================================================================== 00:20:38.844 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:38.844 01:55:24 -- common/autotest_common.sh@950 -- # wait 2189334 00:20:39.103 01:55:24 -- target/tls.sh@37 -- # return 1 00:20:39.103 01:55:24 -- common/autotest_common.sh@643 -- # es=1 00:20:39.103 01:55:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:39.103 01:55:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:39.103 01:55:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:39.103 01:55:24 -- target/tls.sh@183 -- # killprocess 2187565 00:20:39.103 01:55:24 -- common/autotest_common.sh@926 -- # '[' -z 2187565 ']' 00:20:39.104 01:55:24 -- common/autotest_common.sh@930 -- # kill -0 2187565 00:20:39.104 01:55:24 -- common/autotest_common.sh@931 -- # uname 00:20:39.104 01:55:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:39.104 01:55:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2187565 00:20:39.104 01:55:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:39.104 01:55:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:39.104 01:55:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2187565' 00:20:39.104 killing process with pid 2187565 00:20:39.104 01:55:24 -- common/autotest_common.sh@945 -- # kill 2187565 00:20:39.104 01:55:24 -- common/autotest_common.sh@950 -- # wait 2187565 00:20:39.363 01:55:24 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:20:39.363 01:55:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:39.363 01:55:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:39.363 01:55:24 -- common/autotest_common.sh@10 -- # set +x 00:20:39.363 01:55:24 -- nvmf/common.sh@469 -- # nvmfpid=2189517 00:20:39.363 01:55:24 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:39.363 01:55:24 -- nvmf/common.sh@470 -- # waitforlisten 2189517 00:20:39.363 01:55:24 -- common/autotest_common.sh@819 -- # '[' -z 2189517 ']' 00:20:39.363 01:55:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.363 01:55:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:39.363 01:55:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
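The stage that just wound down is the client-side file-permission check: after chmod 0666 loosened the key file, the attach was refused with code -22 ("Could not retrieve PSK from file"; bdev_nvme_rpc logs "Incorrect permissions for PSK file"). Reduced to its moving parts, with attach_tls as a hypothetical wrapper around the rpc.py call seen throughout the trace:

    key=$SPDK/test/nvmf/target/key_long.txt
    attach_tls() {   # hypothetical helper; body is the attach call from the trace
        $SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
            -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
            -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 "$@"
    }
    chmod 0666 "$key"
    attach_tls --psk "$key" && exit 1   # rejected client-side: code -22

The next stage repeats the check target-side: nvmf_subsystem_add_host with the same 0666 key returns -32603 ("Internal error") from tcp_load_psk, before chmod 0600 restores the strict mode the loader expects.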
00:20:39.363 01:55:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:39.363 01:55:24 -- common/autotest_common.sh@10 -- # set +x 00:20:39.623 [2024-04-15 01:55:25.021887] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:39.623 [2024-04-15 01:55:25.021979] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.623 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.623 [2024-04-15 01:55:25.089828] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.623 [2024-04-15 01:55:25.175985] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:39.623 [2024-04-15 01:55:25.176171] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.623 [2024-04-15 01:55:25.176192] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.623 [2024-04-15 01:55:25.176206] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:39.623 [2024-04-15 01:55:25.176243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.561 01:55:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:40.561 01:55:25 -- common/autotest_common.sh@852 -- # return 0 00:20:40.561 01:55:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:40.561 01:55:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:40.561 01:55:25 -- common/autotest_common.sh@10 -- # set +x 00:20:40.561 01:55:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.561 01:55:25 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:40.561 01:55:25 -- common/autotest_common.sh@640 -- # local es=0 00:20:40.561 01:55:25 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:40.561 01:55:25 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:20:40.561 01:55:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:40.561 01:55:25 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:20:40.561 01:55:25 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:20:40.561 01:55:25 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:40.561 01:55:25 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:40.561 01:55:25 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:40.561 [2024-04-15 01:55:26.181921] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.561 01:55:26 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:40.820 01:55:26 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:41.078 [2024-04-15 01:55:26.655179] tcp.c: 
912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:41.078 [2024-04-15 01:55:26.655406] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.078 01:55:26 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:41.336 malloc0 00:20:41.336 01:55:26 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:41.594 01:55:27 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:41.853 [2024-04-15 01:55:27.376730] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:20:41.853 [2024-04-15 01:55:27.376772] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:20:41.853 [2024-04-15 01:55:27.376797] subsystem.c: 840:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:20:41.853 request: 00:20:41.853 { 00:20:41.853 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.853 "host": "nqn.2016-06.io.spdk:host1", 00:20:41.853 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:41.853 "method": "nvmf_subsystem_add_host", 00:20:41.853 "req_id": 1 00:20:41.853 } 00:20:41.853 Got JSON-RPC error response 00:20:41.853 response: 00:20:41.853 { 00:20:41.853 "code": -32603, 00:20:41.853 "message": "Internal error" 00:20:41.853 } 00:20:41.853 01:55:27 -- common/autotest_common.sh@643 -- # es=1 00:20:41.853 01:55:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:20:41.853 01:55:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:20:41.853 01:55:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:20:41.853 01:55:27 -- target/tls.sh@189 -- # killprocess 2189517 00:20:41.853 01:55:27 -- common/autotest_common.sh@926 -- # '[' -z 2189517 ']' 00:20:41.853 01:55:27 -- common/autotest_common.sh@930 -- # kill -0 2189517 00:20:41.853 01:55:27 -- common/autotest_common.sh@931 -- # uname 00:20:41.853 01:55:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:41.853 01:55:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2189517 00:20:41.853 01:55:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:41.853 01:55:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:41.853 01:55:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2189517' 00:20:41.853 killing process with pid 2189517 00:20:41.853 01:55:27 -- common/autotest_common.sh@945 -- # kill 2189517 00:20:41.853 01:55:27 -- common/autotest_common.sh@950 -- # wait 2189517 00:20:42.112 01:55:27 -- target/tls.sh@190 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:42.112 01:55:27 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:20:42.112 01:55:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:42.112 01:55:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:42.112 01:55:27 -- common/autotest_common.sh@10 -- # set +x 00:20:42.112 01:55:27 -- nvmf/common.sh@469 -- # nvmfpid=2189899 00:20:42.112 01:55:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x2 00:20:42.112 01:55:27 -- nvmf/common.sh@470 -- # waitforlisten 2189899 00:20:42.112 01:55:27 -- common/autotest_common.sh@819 -- # '[' -z 2189899 ']' 00:20:42.112 01:55:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.112 01:55:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:42.112 01:55:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.112 01:55:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:42.112 01:55:27 -- common/autotest_common.sh@10 -- # set +x 00:20:42.112 [2024-04-15 01:55:27.698791] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:42.112 [2024-04-15 01:55:27.698863] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.112 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.372 [2024-04-15 01:55:27.762187] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.372 [2024-04-15 01:55:27.846233] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:42.372 [2024-04-15 01:55:27.846399] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.372 [2024-04-15 01:55:27.846417] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.372 [2024-04-15 01:55:27.846429] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:42.372 [2024-04-15 01:55:27.846459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:43.309 01:55:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:43.309 01:55:28 -- common/autotest_common.sh@852 -- # return 0 00:20:43.309 01:55:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:43.309 01:55:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:43.309 01:55:28 -- common/autotest_common.sh@10 -- # set +x 00:20:43.309 01:55:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.309 01:55:28 -- target/tls.sh@194 -- # setup_nvmf_tgt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:43.309 01:55:28 -- target/tls.sh@58 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:43.309 01:55:28 -- target/tls.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:43.309 [2024-04-15 01:55:28.881234] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.309 01:55:28 -- target/tls.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:43.567 01:55:29 -- target/tls.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:43.825 [2024-04-15 01:55:29.406674] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:43.825 [2024-04-15 01:55:29.406913] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.825 01:55:29 -- target/tls.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:44.083 malloc0 00:20:44.083 01:55:29 -- target/tls.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:44.342 01:55:29 -- target/tls.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:44.600 01:55:30 -- target/tls.sh@197 -- # bdevperf_pid=2190244 00:20:44.600 01:55:30 -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:44.600 01:55:30 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:44.600 01:55:30 -- target/tls.sh@200 -- # waitforlisten 2190244 /var/tmp/bdevperf.sock 00:20:44.600 01:55:30 -- common/autotest_common.sh@819 -- # '[' -z 2190244 ']' 00:20:44.600 01:55:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:44.600 01:55:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:44.600 01:55:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:44.600 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
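With the key back to 0600, the final bring-up succeeds; the sequence traced above is the body of setup_nvmf_tgt plus the host registration, and it is what the save_config dump below reflects. (The trace_register_description complaint about RDMA_REQ_RDY_TO_COMPL_PEND during startup appears to be a cosmetic name-length warning tied to enabling all tracepoint groups with -e 0xFFFF, not a test failure.) Target-side, condensed with the $SPDK shorthand:

    RPC="$SPDK/scripts/rpc.py"
    KEY=$SPDK/test/nvmf/target/key_long.txt
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"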
00:20:44.600 01:55:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:44.600 01:55:30 -- common/autotest_common.sh@10 -- # set +x 00:20:44.860 [2024-04-15 01:55:30.279364] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:44.860 [2024-04-15 01:55:30.279455] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190244 ] 00:20:44.860 EAL: No free 2048 kB hugepages reported on node 1 00:20:44.860 [2024-04-15 01:55:30.338673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.860 [2024-04-15 01:55:30.422930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:45.798 01:55:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:45.798 01:55:31 -- common/autotest_common.sh@852 -- # return 0 00:20:45.798 01:55:31 -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:20:46.057 [2024-04-15 01:55:31.496459] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:46.057 TLSTESTn1 00:20:46.057 01:55:31 -- target/tls.sh@205 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:20:46.315 01:55:31 -- target/tls.sh@205 -- # tgtconf='{ 00:20:46.316 "subsystems": [ 00:20:46.316 { 00:20:46.316 "subsystem": "iobuf", 00:20:46.316 "config": [ 00:20:46.316 { 00:20:46.316 "method": "iobuf_set_options", 00:20:46.316 "params": { 00:20:46.316 "small_pool_count": 8192, 00:20:46.316 "large_pool_count": 1024, 00:20:46.316 "small_bufsize": 8192, 00:20:46.316 "large_bufsize": 135168 00:20:46.316 } 00:20:46.316 } 00:20:46.316 ] 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "subsystem": "sock", 00:20:46.316 "config": [ 00:20:46.316 { 00:20:46.316 "method": "sock_impl_set_options", 00:20:46.316 "params": { 00:20:46.316 "impl_name": "posix", 00:20:46.316 "recv_buf_size": 2097152, 00:20:46.316 "send_buf_size": 2097152, 00:20:46.316 "enable_recv_pipe": true, 00:20:46.316 "enable_quickack": false, 00:20:46.316 "enable_placement_id": 0, 00:20:46.316 "enable_zerocopy_send_server": true, 00:20:46.316 "enable_zerocopy_send_client": false, 00:20:46.316 "zerocopy_threshold": 0, 00:20:46.316 "tls_version": 0, 00:20:46.316 "enable_ktls": false 00:20:46.316 } 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "method": "sock_impl_set_options", 00:20:46.316 "params": { 00:20:46.316 "impl_name": "ssl", 00:20:46.316 "recv_buf_size": 4096, 00:20:46.316 "send_buf_size": 4096, 00:20:46.316 "enable_recv_pipe": true, 00:20:46.316 "enable_quickack": false, 00:20:46.316 "enable_placement_id": 0, 00:20:46.316 "enable_zerocopy_send_server": true, 00:20:46.316 "enable_zerocopy_send_client": false, 00:20:46.316 "zerocopy_threshold": 0, 00:20:46.316 "tls_version": 0, 00:20:46.316 "enable_ktls": false 00:20:46.316 } 00:20:46.316 } 00:20:46.316 ] 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "subsystem": "vmd", 00:20:46.316 "config": [] 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "subsystem": "accel", 00:20:46.316 "config": [ 00:20:46.316 { 00:20:46.316 "method": "accel_set_options", 00:20:46.316 "params": { 00:20:46.316 "small_cache_size": 128, 
00:20:46.316 "large_cache_size": 16, 00:20:46.316 "task_count": 2048, 00:20:46.316 "sequence_count": 2048, 00:20:46.316 "buf_count": 2048 00:20:46.316 } 00:20:46.316 } 00:20:46.316 ] 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "subsystem": "bdev", 00:20:46.316 "config": [ 00:20:46.316 { 00:20:46.316 "method": "bdev_set_options", 00:20:46.316 "params": { 00:20:46.316 "bdev_io_pool_size": 65535, 00:20:46.316 "bdev_io_cache_size": 256, 00:20:46.316 "bdev_auto_examine": true, 00:20:46.316 "iobuf_small_cache_size": 128, 00:20:46.316 "iobuf_large_cache_size": 16 00:20:46.316 } 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "method": "bdev_raid_set_options", 00:20:46.316 "params": { 00:20:46.316 "process_window_size_kb": 1024 00:20:46.316 } 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "method": "bdev_iscsi_set_options", 00:20:46.316 "params": { 00:20:46.316 "timeout_sec": 30 00:20:46.316 } 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "method": "bdev_nvme_set_options", 00:20:46.316 "params": { 00:20:46.316 "action_on_timeout": "none", 00:20:46.316 "timeout_us": 0, 00:20:46.316 "timeout_admin_us": 0, 00:20:46.316 "keep_alive_timeout_ms": 10000, 00:20:46.316 "transport_retry_count": 4, 00:20:46.316 "arbitration_burst": 0, 00:20:46.316 "low_priority_weight": 0, 00:20:46.316 "medium_priority_weight": 0, 00:20:46.316 "high_priority_weight": 0, 00:20:46.316 "nvme_adminq_poll_period_us": 10000, 00:20:46.316 "nvme_ioq_poll_period_us": 0, 00:20:46.316 "io_queue_requests": 0, 00:20:46.316 "delay_cmd_submit": true, 00:20:46.316 "bdev_retry_count": 3, 00:20:46.316 "transport_ack_timeout": 0, 00:20:46.316 "ctrlr_loss_timeout_sec": 0, 00:20:46.316 "reconnect_delay_sec": 0, 00:20:46.316 "fast_io_fail_timeout_sec": 0, 00:20:46.316 "generate_uuids": false, 00:20:46.316 "transport_tos": 0, 00:20:46.316 "io_path_stat": false, 00:20:46.316 "allow_accel_sequence": false 00:20:46.316 } 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "method": "bdev_nvme_set_hotplug", 00:20:46.316 "params": { 00:20:46.316 "period_us": 100000, 00:20:46.316 "enable": false 00:20:46.316 } 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "method": "bdev_malloc_create", 00:20:46.316 "params": { 00:20:46.316 "name": "malloc0", 00:20:46.316 "num_blocks": 8192, 00:20:46.316 "block_size": 4096, 00:20:46.316 "physical_block_size": 4096, 00:20:46.316 "uuid": "88a7241d-25a7-433c-bd35-8dcd021c9563", 00:20:46.316 "optimal_io_boundary": 0 00:20:46.316 } 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "method": "bdev_wait_for_examine" 00:20:46.316 } 00:20:46.316 ] 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "subsystem": "nbd", 00:20:46.316 "config": [] 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "subsystem": "scheduler", 00:20:46.316 "config": [ 00:20:46.316 { 00:20:46.316 "method": "framework_set_scheduler", 00:20:46.316 "params": { 00:20:46.316 "name": "static" 00:20:46.316 } 00:20:46.316 } 00:20:46.316 ] 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "subsystem": "nvmf", 00:20:46.316 "config": [ 00:20:46.316 { 00:20:46.316 "method": "nvmf_set_config", 00:20:46.316 "params": { 00:20:46.316 "discovery_filter": "match_any", 00:20:46.316 "admin_cmd_passthru": { 00:20:46.316 "identify_ctrlr": false 00:20:46.316 } 00:20:46.316 } 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "method": "nvmf_set_max_subsystems", 00:20:46.316 "params": { 00:20:46.316 "max_subsystems": 1024 00:20:46.316 } 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "method": "nvmf_set_crdt", 00:20:46.316 "params": { 00:20:46.316 "crdt1": 0, 00:20:46.316 "crdt2": 0, 00:20:46.316 "crdt3": 0 00:20:46.316 } 
00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "method": "nvmf_create_transport", 00:20:46.316 "params": { 00:20:46.316 "trtype": "TCP", 00:20:46.316 "max_queue_depth": 128, 00:20:46.316 "max_io_qpairs_per_ctrlr": 127, 00:20:46.316 "in_capsule_data_size": 4096, 00:20:46.316 "max_io_size": 131072, 00:20:46.316 "io_unit_size": 131072, 00:20:46.316 "max_aq_depth": 128, 00:20:46.316 "num_shared_buffers": 511, 00:20:46.316 "buf_cache_size": 4294967295, 00:20:46.316 "dif_insert_or_strip": false, 00:20:46.316 "zcopy": false, 00:20:46.316 "c2h_success": false, 00:20:46.316 "sock_priority": 0, 00:20:46.316 "abort_timeout_sec": 1 00:20:46.316 } 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "method": "nvmf_create_subsystem", 00:20:46.316 "params": { 00:20:46.316 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.316 "allow_any_host": false, 00:20:46.316 "serial_number": "SPDK00000000000001", 00:20:46.316 "model_number": "SPDK bdev Controller", 00:20:46.316 "max_namespaces": 10, 00:20:46.316 "min_cntlid": 1, 00:20:46.316 "max_cntlid": 65519, 00:20:46.316 "ana_reporting": false 00:20:46.316 } 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "method": "nvmf_subsystem_add_host", 00:20:46.316 "params": { 00:20:46.316 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.316 "host": "nqn.2016-06.io.spdk:host1", 00:20:46.316 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:46.316 } 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "method": "nvmf_subsystem_add_ns", 00:20:46.316 "params": { 00:20:46.316 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.316 "namespace": { 00:20:46.316 "nsid": 1, 00:20:46.316 "bdev_name": "malloc0", 00:20:46.316 "nguid": "88A7241D25A7433CBD358DCD021C9563", 00:20:46.316 "uuid": "88a7241d-25a7-433c-bd35-8dcd021c9563" 00:20:46.316 } 00:20:46.316 } 00:20:46.316 }, 00:20:46.316 { 00:20:46.316 "method": "nvmf_subsystem_add_listener", 00:20:46.316 "params": { 00:20:46.316 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.316 "listen_address": { 00:20:46.317 "trtype": "TCP", 00:20:46.317 "adrfam": "IPv4", 00:20:46.317 "traddr": "10.0.0.2", 00:20:46.317 "trsvcid": "4420" 00:20:46.317 }, 00:20:46.317 "secure_channel": true 00:20:46.317 } 00:20:46.317 } 00:20:46.317 ] 00:20:46.317 } 00:20:46.317 ] 00:20:46.317 }' 00:20:46.317 01:55:31 -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:46.883 01:55:32 -- target/tls.sh@206 -- # bdevperfconf='{ 00:20:46.883 "subsystems": [ 00:20:46.883 { 00:20:46.883 "subsystem": "iobuf", 00:20:46.883 "config": [ 00:20:46.883 { 00:20:46.883 "method": "iobuf_set_options", 00:20:46.883 "params": { 00:20:46.883 "small_pool_count": 8192, 00:20:46.883 "large_pool_count": 1024, 00:20:46.883 "small_bufsize": 8192, 00:20:46.883 "large_bufsize": 135168 00:20:46.883 } 00:20:46.883 } 00:20:46.883 ] 00:20:46.883 }, 00:20:46.883 { 00:20:46.883 "subsystem": "sock", 00:20:46.883 "config": [ 00:20:46.883 { 00:20:46.883 "method": "sock_impl_set_options", 00:20:46.883 "params": { 00:20:46.883 "impl_name": "posix", 00:20:46.883 "recv_buf_size": 2097152, 00:20:46.883 "send_buf_size": 2097152, 00:20:46.883 "enable_recv_pipe": true, 00:20:46.883 "enable_quickack": false, 00:20:46.883 "enable_placement_id": 0, 00:20:46.883 "enable_zerocopy_send_server": true, 00:20:46.883 "enable_zerocopy_send_client": false, 00:20:46.883 "zerocopy_threshold": 0, 00:20:46.883 "tls_version": 0, 00:20:46.883 "enable_ktls": false 00:20:46.883 } 00:20:46.883 }, 00:20:46.883 { 00:20:46.883 "method": 
"sock_impl_set_options", 00:20:46.883 "params": { 00:20:46.883 "impl_name": "ssl", 00:20:46.883 "recv_buf_size": 4096, 00:20:46.883 "send_buf_size": 4096, 00:20:46.883 "enable_recv_pipe": true, 00:20:46.883 "enable_quickack": false, 00:20:46.883 "enable_placement_id": 0, 00:20:46.883 "enable_zerocopy_send_server": true, 00:20:46.883 "enable_zerocopy_send_client": false, 00:20:46.883 "zerocopy_threshold": 0, 00:20:46.883 "tls_version": 0, 00:20:46.883 "enable_ktls": false 00:20:46.883 } 00:20:46.883 } 00:20:46.883 ] 00:20:46.883 }, 00:20:46.883 { 00:20:46.883 "subsystem": "vmd", 00:20:46.883 "config": [] 00:20:46.883 }, 00:20:46.883 { 00:20:46.883 "subsystem": "accel", 00:20:46.883 "config": [ 00:20:46.883 { 00:20:46.883 "method": "accel_set_options", 00:20:46.883 "params": { 00:20:46.883 "small_cache_size": 128, 00:20:46.883 "large_cache_size": 16, 00:20:46.883 "task_count": 2048, 00:20:46.883 "sequence_count": 2048, 00:20:46.883 "buf_count": 2048 00:20:46.883 } 00:20:46.883 } 00:20:46.883 ] 00:20:46.883 }, 00:20:46.883 { 00:20:46.883 "subsystem": "bdev", 00:20:46.883 "config": [ 00:20:46.883 { 00:20:46.883 "method": "bdev_set_options", 00:20:46.883 "params": { 00:20:46.883 "bdev_io_pool_size": 65535, 00:20:46.883 "bdev_io_cache_size": 256, 00:20:46.883 "bdev_auto_examine": true, 00:20:46.883 "iobuf_small_cache_size": 128, 00:20:46.883 "iobuf_large_cache_size": 16 00:20:46.883 } 00:20:46.883 }, 00:20:46.883 { 00:20:46.883 "method": "bdev_raid_set_options", 00:20:46.883 "params": { 00:20:46.883 "process_window_size_kb": 1024 00:20:46.883 } 00:20:46.883 }, 00:20:46.883 { 00:20:46.883 "method": "bdev_iscsi_set_options", 00:20:46.883 "params": { 00:20:46.883 "timeout_sec": 30 00:20:46.883 } 00:20:46.883 }, 00:20:46.883 { 00:20:46.883 "method": "bdev_nvme_set_options", 00:20:46.883 "params": { 00:20:46.883 "action_on_timeout": "none", 00:20:46.883 "timeout_us": 0, 00:20:46.883 "timeout_admin_us": 0, 00:20:46.883 "keep_alive_timeout_ms": 10000, 00:20:46.883 "transport_retry_count": 4, 00:20:46.883 "arbitration_burst": 0, 00:20:46.883 "low_priority_weight": 0, 00:20:46.883 "medium_priority_weight": 0, 00:20:46.883 "high_priority_weight": 0, 00:20:46.883 "nvme_adminq_poll_period_us": 10000, 00:20:46.883 "nvme_ioq_poll_period_us": 0, 00:20:46.883 "io_queue_requests": 512, 00:20:46.883 "delay_cmd_submit": true, 00:20:46.883 "bdev_retry_count": 3, 00:20:46.883 "transport_ack_timeout": 0, 00:20:46.883 "ctrlr_loss_timeout_sec": 0, 00:20:46.883 "reconnect_delay_sec": 0, 00:20:46.883 "fast_io_fail_timeout_sec": 0, 00:20:46.883 "generate_uuids": false, 00:20:46.883 "transport_tos": 0, 00:20:46.883 "io_path_stat": false, 00:20:46.883 "allow_accel_sequence": false 00:20:46.883 } 00:20:46.883 }, 00:20:46.883 { 00:20:46.883 "method": "bdev_nvme_attach_controller", 00:20:46.883 "params": { 00:20:46.883 "name": "TLSTEST", 00:20:46.883 "trtype": "TCP", 00:20:46.883 "adrfam": "IPv4", 00:20:46.883 "traddr": "10.0.0.2", 00:20:46.883 "trsvcid": "4420", 00:20:46.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:46.883 "prchk_reftag": false, 00:20:46.883 "prchk_guard": false, 00:20:46.883 "ctrlr_loss_timeout_sec": 0, 00:20:46.883 "reconnect_delay_sec": 0, 00:20:46.883 "fast_io_fail_timeout_sec": 0, 00:20:46.883 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:46.883 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:46.883 "hdgst": false, 00:20:46.883 "ddgst": false 00:20:46.883 } 00:20:46.883 }, 00:20:46.883 { 00:20:46.883 "method": "bdev_nvme_set_hotplug", 00:20:46.883 
"params": { 00:20:46.883 "period_us": 100000, 00:20:46.883 "enable": false 00:20:46.883 } 00:20:46.883 }, 00:20:46.883 { 00:20:46.883 "method": "bdev_wait_for_examine" 00:20:46.883 } 00:20:46.883 ] 00:20:46.883 }, 00:20:46.883 { 00:20:46.883 "subsystem": "nbd", 00:20:46.883 "config": [] 00:20:46.883 } 00:20:46.883 ] 00:20:46.883 }' 00:20:46.883 01:55:32 -- target/tls.sh@208 -- # killprocess 2190244 00:20:46.883 01:55:32 -- common/autotest_common.sh@926 -- # '[' -z 2190244 ']' 00:20:46.883 01:55:32 -- common/autotest_common.sh@930 -- # kill -0 2190244 00:20:46.883 01:55:32 -- common/autotest_common.sh@931 -- # uname 00:20:46.883 01:55:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:46.883 01:55:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2190244 00:20:46.883 01:55:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:46.883 01:55:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:46.883 01:55:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2190244' 00:20:46.883 killing process with pid 2190244 00:20:46.883 01:55:32 -- common/autotest_common.sh@945 -- # kill 2190244 00:20:46.883 Received shutdown signal, test time was about 10.000000 seconds 00:20:46.883 00:20:46.883 Latency(us) 00:20:46.883 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:46.883 =================================================================================================================== 00:20:46.883 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:46.883 01:55:32 -- common/autotest_common.sh@950 -- # wait 2190244 00:20:46.883 01:55:32 -- target/tls.sh@209 -- # killprocess 2189899 00:20:46.883 01:55:32 -- common/autotest_common.sh@926 -- # '[' -z 2189899 ']' 00:20:46.883 01:55:32 -- common/autotest_common.sh@930 -- # kill -0 2189899 00:20:46.883 01:55:32 -- common/autotest_common.sh@931 -- # uname 00:20:46.883 01:55:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:46.883 01:55:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2189899 00:20:46.883 01:55:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:46.883 01:55:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:46.883 01:55:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2189899' 00:20:46.883 killing process with pid 2189899 00:20:46.883 01:55:32 -- common/autotest_common.sh@945 -- # kill 2189899 00:20:46.883 01:55:32 -- common/autotest_common.sh@950 -- # wait 2189899 00:20:47.143 01:55:32 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:20:47.143 01:55:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:47.143 01:55:32 -- target/tls.sh@212 -- # echo '{ 00:20:47.143 "subsystems": [ 00:20:47.143 { 00:20:47.143 "subsystem": "iobuf", 00:20:47.143 "config": [ 00:20:47.143 { 00:20:47.143 "method": "iobuf_set_options", 00:20:47.143 "params": { 00:20:47.143 "small_pool_count": 8192, 00:20:47.143 "large_pool_count": 1024, 00:20:47.143 "small_bufsize": 8192, 00:20:47.143 "large_bufsize": 135168 00:20:47.143 } 00:20:47.143 } 00:20:47.143 ] 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "subsystem": "sock", 00:20:47.143 "config": [ 00:20:47.143 { 00:20:47.143 "method": "sock_impl_set_options", 00:20:47.143 "params": { 00:20:47.143 "impl_name": "posix", 00:20:47.143 "recv_buf_size": 2097152, 00:20:47.143 "send_buf_size": 2097152, 00:20:47.143 "enable_recv_pipe": true, 00:20:47.143 "enable_quickack": false, 
00:20:47.143 "enable_placement_id": 0, 00:20:47.143 "enable_zerocopy_send_server": true, 00:20:47.143 "enable_zerocopy_send_client": false, 00:20:47.143 "zerocopy_threshold": 0, 00:20:47.143 "tls_version": 0, 00:20:47.143 "enable_ktls": false 00:20:47.143 } 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "method": "sock_impl_set_options", 00:20:47.143 "params": { 00:20:47.143 "impl_name": "ssl", 00:20:47.143 "recv_buf_size": 4096, 00:20:47.143 "send_buf_size": 4096, 00:20:47.143 "enable_recv_pipe": true, 00:20:47.143 "enable_quickack": false, 00:20:47.143 "enable_placement_id": 0, 00:20:47.143 "enable_zerocopy_send_server": true, 00:20:47.143 "enable_zerocopy_send_client": false, 00:20:47.143 "zerocopy_threshold": 0, 00:20:47.143 "tls_version": 0, 00:20:47.143 "enable_ktls": false 00:20:47.143 } 00:20:47.143 } 00:20:47.143 ] 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "subsystem": "vmd", 00:20:47.143 "config": [] 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "subsystem": "accel", 00:20:47.143 "config": [ 00:20:47.143 { 00:20:47.143 "method": "accel_set_options", 00:20:47.143 "params": { 00:20:47.143 "small_cache_size": 128, 00:20:47.143 "large_cache_size": 16, 00:20:47.143 "task_count": 2048, 00:20:47.143 "sequence_count": 2048, 00:20:47.143 "buf_count": 2048 00:20:47.143 } 00:20:47.143 } 00:20:47.143 ] 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "subsystem": "bdev", 00:20:47.143 "config": [ 00:20:47.143 { 00:20:47.143 "method": "bdev_set_options", 00:20:47.143 "params": { 00:20:47.143 "bdev_io_pool_size": 65535, 00:20:47.143 "bdev_io_cache_size": 256, 00:20:47.143 "bdev_auto_examine": true, 00:20:47.143 "iobuf_small_cache_size": 128, 00:20:47.143 "iobuf_large_cache_size": 16 00:20:47.143 } 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "method": "bdev_raid_set_options", 00:20:47.143 "params": { 00:20:47.143 "process_window_size_kb": 1024 00:20:47.143 } 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "method": "bdev_iscsi_set_options", 00:20:47.143 "params": { 00:20:47.143 "timeout_sec": 30 00:20:47.143 } 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "method": "bdev_nvme_set_options", 00:20:47.143 "params": { 00:20:47.143 "action_on_timeout": "none", 00:20:47.143 "timeout_us": 0, 00:20:47.143 "timeout_admin_us": 0, 00:20:47.143 "keep_alive_timeout_ms": 10000, 00:20:47.143 "transport_retry_count": 4, 00:20:47.143 "arbitration_burst": 0, 00:20:47.143 "low_priority_weight": 0, 00:20:47.143 "medium_priority_weight": 0, 00:20:47.143 "high_priority_weight": 0, 00:20:47.143 "nvme_adminq_poll_period_us": 10000, 00:20:47.143 "nvme_ioq_poll_period_us": 0, 00:20:47.143 "io_queue_requests": 0, 00:20:47.143 "delay_cmd_submit": true, 00:20:47.143 "bdev_retry_count": 3, 00:20:47.143 "transport_ack_timeout": 0, 00:20:47.143 "ctrlr_loss_timeout_sec": 0, 00:20:47.143 "reconnect_delay_sec": 0, 00:20:47.143 "fast_io_fail_timeout_sec": 0, 00:20:47.143 "generate_uuids": false, 00:20:47.143 "transport_tos": 0, 00:20:47.143 "io_path_stat": false, 00:20:47.143 "allow_accel_sequence": false 00:20:47.143 } 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "method": "bdev_nvme_set_hotplug", 00:20:47.143 "params": { 00:20:47.143 "period_us": 100000, 00:20:47.143 "enable": false 00:20:47.143 } 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "method": "bdev_malloc_create", 00:20:47.143 "params": { 00:20:47.143 "name": "malloc0", 00:20:47.143 "num_blocks": 8192, 00:20:47.143 "block_size": 4096, 00:20:47.143 "physical_block_size": 4096, 00:20:47.143 "uuid": "88a7241d-25a7-433c-bd35-8dcd021c9563", 00:20:47.143 "optimal_io_boundary": 0 00:20:47.143 
} 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "method": "bdev_wait_for_examine" 00:20:47.143 } 00:20:47.143 ] 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "subsystem": "nbd", 00:20:47.143 "config": [] 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "subsystem": "scheduler", 00:20:47.143 "config": [ 00:20:47.143 { 00:20:47.143 "method": "framework_set_scheduler", 00:20:47.143 "params": { 00:20:47.143 "name": "static" 00:20:47.143 } 00:20:47.143 } 00:20:47.143 ] 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "subsystem": "nvmf", 00:20:47.143 "config": [ 00:20:47.143 { 00:20:47.143 "method": "nvmf_set_config", 00:20:47.143 "params": { 00:20:47.143 "discovery_filter": "match_any", 00:20:47.143 "admin_cmd_passthru": { 00:20:47.143 "identify_ctrlr": false 00:20:47.143 } 00:20:47.143 } 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "method": "nvmf_set_max_subsystems", 00:20:47.143 "params": { 00:20:47.143 "max_subsystems": 1024 00:20:47.143 } 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "method": "nvmf_set_crdt", 00:20:47.143 "params": { 00:20:47.143 "crdt1": 0, 00:20:47.143 "crdt2": 0, 00:20:47.143 "crdt3": 0 00:20:47.143 } 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "method": "nvmf_create_transport", 00:20:47.143 "params": { 00:20:47.143 "trtype": "TCP", 00:20:47.143 "max_queue_depth": 128, 00:20:47.143 "max_io_qpairs_per_ctrlr": 127, 00:20:47.143 "in_capsule_data_size": 4096, 00:20:47.143 "max_io_size": 131072, 00:20:47.143 "io_unit_size": 131072, 00:20:47.143 "max_aq_depth": 128, 00:20:47.143 "num_shared_buffers": 511, 00:20:47.143 "buf_cache_size": 4294967295, 00:20:47.143 "dif_insert_or_strip": false, 00:20:47.143 "zcopy": false, 00:20:47.143 "c2h_success": false, 00:20:47.143 "sock_priority": 0, 00:20:47.143 "abort_timeout_sec": 1 00:20:47.143 } 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "method": "nvmf_create_subsystem", 00:20:47.143 "params": { 00:20:47.143 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.143 "allow_any_host": false, 00:20:47.143 "serial_number": "SPDK00000000000001", 00:20:47.143 "model_number": "SPDK bdev Controller", 00:20:47.143 "max_namespaces": 10, 00:20:47.143 "min_cntlid": 1, 00:20:47.143 "max_cntlid": 65519, 00:20:47.143 "ana_reporting": false 00:20:47.143 } 00:20:47.143 }, 00:20:47.143 { 00:20:47.143 "method": "nvmf_subsystem_add_host", 00:20:47.143 "params": { 00:20:47.143 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.143 "host": "nqn.2016-06.io.spdk:host1", 00:20:47.144 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt" 00:20:47.144 } 00:20:47.144 }, 00:20:47.144 { 00:20:47.144 "method": "nvmf_subsystem_add_ns", 00:20:47.144 "params": { 00:20:47.144 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.144 "namespace": { 00:20:47.144 "nsid": 1, 00:20:47.144 "bdev_name": "malloc0", 00:20:47.144 "nguid": "88A7241D25A7433CBD358DCD021C9563", 00:20:47.144 "uuid": "88a7241d-25a7-433c-bd35-8dcd021c9563" 00:20:47.144 } 00:20:47.144 } 00:20:47.144 }, 00:20:47.144 { 00:20:47.144 "method": "nvmf_subsystem_add_listener", 00:20:47.144 "params": { 00:20:47.144 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.144 "listen_address": { 00:20:47.144 "trtype": "TCP", 00:20:47.144 "adrfam": "IPv4", 00:20:47.144 "traddr": "10.0.0.2", 00:20:47.144 "trsvcid": "4420" 00:20:47.144 }, 00:20:47.144 "secure_channel": true 00:20:47.144 } 00:20:47.144 } 00:20:47.144 ] 00:20:47.144 } 00:20:47.144 ] 00:20:47.144 }' 00:20:47.144 01:55:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:47.144 01:55:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.144 01:55:32 -- 
nvmf/common.sh@469 -- # nvmfpid=2190543 00:20:47.144 01:55:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:20:47.144 01:55:32 -- nvmf/common.sh@470 -- # waitforlisten 2190543 00:20:47.144 01:55:32 -- common/autotest_common.sh@819 -- # '[' -z 2190543 ']' 00:20:47.144 01:55:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.144 01:55:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:47.144 01:55:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.144 01:55:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:47.144 01:55:32 -- common/autotest_common.sh@10 -- # set +x 00:20:47.403 [2024-04-15 01:55:32.809069] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:47.403 [2024-04-15 01:55:32.809175] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.403 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.403 [2024-04-15 01:55:32.875574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.403 [2024-04-15 01:55:32.960124] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:47.403 [2024-04-15 01:55:32.960281] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.403 [2024-04-15 01:55:32.960298] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.403 [2024-04-15 01:55:32.960310] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
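This second target instance is not rebuilt by replaying RPCs: nvmfappstart hands it the JSON captured by save_config, with -c /dev/fd/62 being bash process substitution over the echoed tgtconf string. The captured config carries the full TLS state (the listener's "secure_channel": true and the per-host "psk" path), so no further setup calls are needed. Stripped of the harness wrappers, the invocation is roughly:

# Restart-from-config sketch; $tgtconf is the JSON dumped by save_config above,
# and the real run additionally wraps this in
# 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF':
build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf")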
00:20:47.403 [2024-04-15 01:55:32.960337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.663 [2024-04-15 01:55:33.188549] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:47.663 [2024-04-15 01:55:33.220573] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:47.663 [2024-04-15 01:55:33.220796] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.233 01:55:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:48.233 01:55:33 -- common/autotest_common.sh@852 -- # return 0 00:20:48.233 01:55:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:48.233 01:55:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:48.233 01:55:33 -- common/autotest_common.sh@10 -- # set +x 00:20:48.233 01:55:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.233 01:55:33 -- target/tls.sh@216 -- # bdevperf_pid=2190698 00:20:48.233 01:55:33 -- target/tls.sh@217 -- # waitforlisten 2190698 /var/tmp/bdevperf.sock 00:20:48.233 01:55:33 -- common/autotest_common.sh@819 -- # '[' -z 2190698 ']' 00:20:48.233 01:55:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.233 01:55:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:48.233 01:55:33 -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:20:48.233 01:55:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
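The bdevperf side is restarted the same way: its saved config (echoed below) now carries the bdev_nvme_attach_controller call, PSK path included, so the TLS connection is re-established purely from JSON. The -z flag makes bdevperf idle after init instead of running I/O, waiting for a perform_tests RPC:

# bdevperf restart sketch; $bdevperfconf is the JSON echoed below:
build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf")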
00:20:48.233 01:55:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:48.233 01:55:33 -- target/tls.sh@213 -- # echo '{ 00:20:48.233 "subsystems": [ 00:20:48.233 { 00:20:48.233 "subsystem": "iobuf", 00:20:48.233 "config": [ 00:20:48.233 { 00:20:48.233 "method": "iobuf_set_options", 00:20:48.233 "params": { 00:20:48.233 "small_pool_count": 8192, 00:20:48.234 "large_pool_count": 1024, 00:20:48.234 "small_bufsize": 8192, 00:20:48.234 "large_bufsize": 135168 00:20:48.234 } 00:20:48.234 } 00:20:48.234 ] 00:20:48.234 }, 00:20:48.234 { 00:20:48.234 "subsystem": "sock", 00:20:48.234 "config": [ 00:20:48.234 { 00:20:48.234 "method": "sock_impl_set_options", 00:20:48.234 "params": { 00:20:48.234 "impl_name": "posix", 00:20:48.234 "recv_buf_size": 2097152, 00:20:48.234 "send_buf_size": 2097152, 00:20:48.234 "enable_recv_pipe": true, 00:20:48.234 "enable_quickack": false, 00:20:48.234 "enable_placement_id": 0, 00:20:48.234 "enable_zerocopy_send_server": true, 00:20:48.234 "enable_zerocopy_send_client": false, 00:20:48.234 "zerocopy_threshold": 0, 00:20:48.234 "tls_version": 0, 00:20:48.234 "enable_ktls": false 00:20:48.234 } 00:20:48.234 }, 00:20:48.234 { 00:20:48.234 "method": "sock_impl_set_options", 00:20:48.234 "params": { 00:20:48.234 "impl_name": "ssl", 00:20:48.234 "recv_buf_size": 4096, 00:20:48.234 "send_buf_size": 4096, 00:20:48.234 "enable_recv_pipe": true, 00:20:48.234 "enable_quickack": false, 00:20:48.234 "enable_placement_id": 0, 00:20:48.234 "enable_zerocopy_send_server": true, 00:20:48.234 "enable_zerocopy_send_client": false, 00:20:48.234 "zerocopy_threshold": 0, 00:20:48.234 "tls_version": 0, 00:20:48.234 "enable_ktls": false 00:20:48.234 } 00:20:48.234 } 00:20:48.234 ] 00:20:48.234 }, 00:20:48.234 { 00:20:48.234 "subsystem": "vmd", 00:20:48.234 "config": [] 00:20:48.234 }, 00:20:48.234 { 00:20:48.234 "subsystem": "accel", 00:20:48.234 "config": [ 00:20:48.234 { 00:20:48.234 "method": "accel_set_options", 00:20:48.234 "params": { 00:20:48.234 "small_cache_size": 128, 00:20:48.234 "large_cache_size": 16, 00:20:48.234 "task_count": 2048, 00:20:48.234 "sequence_count": 2048, 00:20:48.234 "buf_count": 2048 00:20:48.234 } 00:20:48.234 } 00:20:48.234 ] 00:20:48.234 }, 00:20:48.234 { 00:20:48.234 "subsystem": "bdev", 00:20:48.234 "config": [ 00:20:48.234 { 00:20:48.234 "method": "bdev_set_options", 00:20:48.234 "params": { 00:20:48.234 "bdev_io_pool_size": 65535, 00:20:48.234 "bdev_io_cache_size": 256, 00:20:48.234 "bdev_auto_examine": true, 00:20:48.234 "iobuf_small_cache_size": 128, 00:20:48.234 "iobuf_large_cache_size": 16 00:20:48.234 } 00:20:48.234 }, 00:20:48.234 { 00:20:48.234 "method": "bdev_raid_set_options", 00:20:48.234 "params": { 00:20:48.234 "process_window_size_kb": 1024 00:20:48.234 } 00:20:48.234 }, 00:20:48.234 { 00:20:48.234 "method": "bdev_iscsi_set_options", 00:20:48.234 "params": { 00:20:48.234 "timeout_sec": 30 00:20:48.234 } 00:20:48.234 }, 00:20:48.234 { 00:20:48.234 "method": "bdev_nvme_set_options", 00:20:48.234 "params": { 00:20:48.234 "action_on_timeout": "none", 00:20:48.234 "timeout_us": 0, 00:20:48.234 "timeout_admin_us": 0, 00:20:48.234 "keep_alive_timeout_ms": 10000, 00:20:48.234 "transport_retry_count": 4, 00:20:48.234 "arbitration_burst": 0, 00:20:48.234 "low_priority_weight": 0, 00:20:48.234 "medium_priority_weight": 0, 00:20:48.234 "high_priority_weight": 0, 00:20:48.234 "nvme_adminq_poll_period_us": 10000, 00:20:48.234 "nvme_ioq_poll_period_us": 0, 00:20:48.234 "io_queue_requests": 512, 00:20:48.234 "delay_cmd_submit": true, 00:20:48.234 
"bdev_retry_count": 3, 00:20:48.234 "transport_ack_timeout": 0, 00:20:48.234 "ctrlr_loss_timeout_sec": 0, 00:20:48.234 "reconnect_delay_sec": 0, 00:20:48.234 "fast_io_fail_timeout_sec": 0, 00:20:48.234 "generate_uuids": false, 00:20:48.234 "transport_tos": 0, 00:20:48.234 "io_path_stat": false, 00:20:48.234 "allow_accel_sequence": false 00:20:48.234 } 00:20:48.234 }, 00:20:48.234 { 00:20:48.234 "method": "bdev_nvme_attach_controller", 00:20:48.234 "params": { 00:20:48.234 "name": "TLSTEST", 00:20:48.234 "trtype": "TCP", 00:20:48.234 "adrfam": "IPv4", 00:20:48.234 "traddr": "10.0.0.2", 00:20:48.234 "trsvcid": "4420", 00:20:48.234 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.234 "prchk_reftag": false, 00:20:48.234 "prchk_guard": false, 00:20:48.234 "ctrlr_loss_timeout_sec": 0, 00:20:48.234 "reconnect_delay_sec": 0, 00:20:48.234 "fast_io_fail_timeout_sec": 0, 00:20:48.234 "psk": "/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt", 00:20:48.234 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:48.234 "hdgst": false, 00:20:48.234 "ddgst": false 00:20:48.234 } 00:20:48.234 }, 00:20:48.234 { 00:20:48.234 "method": "bdev_nvme_set_hotplug", 00:20:48.234 "params": { 00:20:48.234 "period_us": 100000, 00:20:48.234 "enable": false 00:20:48.234 } 00:20:48.234 }, 00:20:48.234 { 00:20:48.234 "method": "bdev_wait_for_examine" 00:20:48.234 } 00:20:48.234 ] 00:20:48.234 }, 00:20:48.234 { 00:20:48.234 "subsystem": "nbd", 00:20:48.234 "config": [] 00:20:48.234 } 00:20:48.234 ] 00:20:48.234 }' 00:20:48.234 01:55:33 -- common/autotest_common.sh@10 -- # set +x 00:20:48.234 [2024-04-15 01:55:33.814681] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:20:48.234 [2024-04-15 01:55:33.814762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190698 ] 00:20:48.234 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.234 [2024-04-15 01:55:33.871815] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.503 [2024-04-15 01:55:33.955818] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:48.503 [2024-04-15 01:55:34.105204] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:49.464 01:55:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:49.464 01:55:34 -- common/autotest_common.sh@852 -- # return 0 00:20:49.464 01:55:34 -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:49.464 Running I/O for 10 seconds... 
00:20:59.440 00:20:59.440 Latency(us) 00:20:59.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.440 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:59.440 Verification LBA range: start 0x0 length 0x2000 00:20:59.440 TLSTESTn1 : 10.06 1103.92 4.31 0.00 0.00 115706.40 6553.60 147577.36 00:20:59.440 =================================================================================================================== 00:20:59.440 Total : 1103.92 4.31 0.00 0.00 115706.40 6553.60 147577.36 00:20:59.440 0 00:20:59.440 01:55:44 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:59.440 01:55:44 -- target/tls.sh@223 -- # killprocess 2190698 00:20:59.440 01:55:44 -- common/autotest_common.sh@926 -- # '[' -z 2190698 ']' 00:20:59.440 01:55:44 -- common/autotest_common.sh@930 -- # kill -0 2190698 00:20:59.440 01:55:44 -- common/autotest_common.sh@931 -- # uname 00:20:59.440 01:55:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:59.440 01:55:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2190698 00:20:59.440 01:55:44 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:20:59.440 01:55:44 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:20:59.440 01:55:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2190698' 00:20:59.440 killing process with pid 2190698 00:20:59.440 01:55:44 -- common/autotest_common.sh@945 -- # kill 2190698 00:20:59.440 Received shutdown signal, test time was about 10.000000 seconds 00:20:59.440 00:20:59.440 Latency(us) 00:20:59.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:59.440 =================================================================================================================== 00:20:59.440 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:59.440 01:55:44 -- common/autotest_common.sh@950 -- # wait 2190698 00:20:59.700 01:55:45 -- target/tls.sh@224 -- # killprocess 2190543 00:20:59.701 01:55:45 -- common/autotest_common.sh@926 -- # '[' -z 2190543 ']' 00:20:59.701 01:55:45 -- common/autotest_common.sh@930 -- # kill -0 2190543 00:20:59.701 01:55:45 -- common/autotest_common.sh@931 -- # uname 00:20:59.701 01:55:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:59.701 01:55:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2190543 00:20:59.701 01:55:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:20:59.701 01:55:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:20:59.701 01:55:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2190543' 00:20:59.701 killing process with pid 2190543 00:20:59.701 01:55:45 -- common/autotest_common.sh@945 -- # kill 2190543 00:20:59.701 01:55:45 -- common/autotest_common.sh@950 -- # wait 2190543 00:20:59.962 01:55:45 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:20:59.962 01:55:45 -- target/tls.sh@227 -- # cleanup 00:20:59.962 01:55:45 -- target/tls.sh@15 -- # process_shm --id 0 00:20:59.962 01:55:45 -- common/autotest_common.sh@796 -- # type=--id 00:20:59.962 01:55:45 -- common/autotest_common.sh@797 -- # id=0 00:20:59.962 01:55:45 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:20:59.962 01:55:45 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:59.962 01:55:45 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:20:59.962 01:55:45 -- common/autotest_common.sh@804 -- # 
[[ -z nvmf_trace.0 ]] 00:20:59.962 01:55:45 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:20:59.962 01:55:45 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:59.962 nvmf_trace.0 00:20:59.962 01:55:45 -- common/autotest_common.sh@811 -- # return 0 00:20:59.962 01:55:45 -- target/tls.sh@16 -- # killprocess 2190698 00:20:59.962 01:55:45 -- common/autotest_common.sh@926 -- # '[' -z 2190698 ']' 00:20:59.962 01:55:45 -- common/autotest_common.sh@930 -- # kill -0 2190698 00:20:59.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2190698) - No such process 00:20:59.962 01:55:45 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2190698 is not found' 00:20:59.962 Process with pid 2190698 is not found 00:20:59.962 01:55:45 -- target/tls.sh@17 -- # nvmftestfini 00:20:59.962 01:55:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:59.962 01:55:45 -- nvmf/common.sh@116 -- # sync 00:20:59.962 01:55:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:59.962 01:55:45 -- nvmf/common.sh@119 -- # set +e 00:20:59.962 01:55:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:59.962 01:55:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:59.962 rmmod nvme_tcp 00:20:59.962 rmmod nvme_fabrics 00:20:59.962 rmmod nvme_keyring 00:20:59.962 01:55:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:59.962 01:55:45 -- nvmf/common.sh@123 -- # set -e 00:20:59.962 01:55:45 -- nvmf/common.sh@124 -- # return 0 00:20:59.962 01:55:45 -- nvmf/common.sh@477 -- # '[' -n 2190543 ']' 00:20:59.962 01:55:45 -- nvmf/common.sh@478 -- # killprocess 2190543 00:20:59.962 01:55:45 -- common/autotest_common.sh@926 -- # '[' -z 2190543 ']' 00:20:59.962 01:55:45 -- common/autotest_common.sh@930 -- # kill -0 2190543 00:20:59.962 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2190543) - No such process 00:20:59.962 01:55:45 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2190543 is not found' 00:20:59.962 Process with pid 2190543 is not found 00:20:59.962 01:55:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:59.962 01:55:45 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:59.962 01:55:45 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:59.962 01:55:45 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:59.962 01:55:45 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:59.962 01:55:45 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:59.962 01:55:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:59.962 01:55:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.505 01:55:47 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:02.505 01:55:47 -- target/tls.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key2.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/key_long.txt 00:21:02.505 00:21:02.505 real 1m13.928s 00:21:02.505 user 1m50.688s 00:21:02.505 sys 0m24.955s 00:21:02.505 01:55:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:02.505 01:55:47 -- common/autotest_common.sh@10 -- # set +x 00:21:02.505 ************************************ 00:21:02.505 END TEST nvmf_tls 00:21:02.505 ************************************ 
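Both "No such process" kills above are expected: the per-test teardown had already reaped pids 2190698 and 2190543, and the trap-driven cleanup/nvmftestfini path re-kills defensively before archiving nvmf_trace.0, unloading the nvme modules, and removing the key files. A simplified sketch of the tolerant pattern, assuming the shape visible in the trace (the real autotest_common.sh helper also inspects the process name and refuses to signal sudo):

killprocess() {
    local pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then       # existence probe only, no signal delivered
        echo "Process with pid $pid is not found"
        return 0
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                 # reap so the RPC socket and ports free up
}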
00:21:02.505 01:55:47 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:02.505 01:55:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:02.505 01:55:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:02.505 01:55:47 -- common/autotest_common.sh@10 -- # set +x 00:21:02.505 ************************************ 00:21:02.505 START TEST nvmf_fips 00:21:02.505 ************************************ 00:21:02.506 01:55:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:02.506 * Looking for test storage... 00:21:02.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:02.506 01:55:47 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:02.506 01:55:47 -- nvmf/common.sh@7 -- # uname -s 00:21:02.506 01:55:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:02.506 01:55:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:02.506 01:55:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:02.506 01:55:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:02.506 01:55:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:02.506 01:55:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:02.506 01:55:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:02.506 01:55:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:02.506 01:55:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:02.506 01:55:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:02.506 01:55:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.506 01:55:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:02.506 01:55:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:02.506 01:55:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:02.506 01:55:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:02.506 01:55:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:02.506 01:55:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:02.506 01:55:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:02.506 01:55:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:02.506 01:55:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.506 01:55:47 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.506 01:55:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.506 01:55:47 -- paths/export.sh@5 -- # export PATH 00:21:02.506 01:55:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:02.506 01:55:47 -- nvmf/common.sh@46 -- # : 0 00:21:02.506 01:55:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:02.506 01:55:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:02.506 01:55:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:02.506 01:55:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:02.506 01:55:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:02.506 01:55:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:02.506 01:55:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:02.506 01:55:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:02.506 01:55:47 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:02.506 01:55:47 -- fips/fips.sh@89 -- # check_openssl_version 00:21:02.506 01:55:47 -- fips/fips.sh@83 -- # local target=3.0.0 00:21:02.506 01:55:47 -- fips/fips.sh@85 -- # openssl version 00:21:02.506 01:55:47 -- fips/fips.sh@85 -- # awk '{print $2}' 00:21:02.506 01:55:47 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:02.506 01:55:47 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:02.506 01:55:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:02.506 01:55:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:02.506 01:55:47 -- scripts/common.sh@335 -- # IFS=.-: 00:21:02.506 01:55:47 -- scripts/common.sh@335 -- # read -ra ver1 00:21:02.506 01:55:47 -- scripts/common.sh@336 -- # IFS=.-: 00:21:02.506 01:55:47 -- scripts/common.sh@336 -- # read -ra ver2 00:21:02.506 01:55:47 -- scripts/common.sh@337 -- # local 'op=>=' 00:21:02.506 01:55:47 -- scripts/common.sh@339 -- # ver1_l=3 00:21:02.506 01:55:47 -- scripts/common.sh@340 -- # ver2_l=3 00:21:02.506 01:55:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 
00:21:02.506 01:55:47 -- scripts/common.sh@343 -- # case "$op" in 00:21:02.506 01:55:47 -- scripts/common.sh@347 -- # : 1 00:21:02.506 01:55:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:02.506 01:55:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:02.506 01:55:47 -- scripts/common.sh@364 -- # decimal 3 00:21:02.506 01:55:47 -- scripts/common.sh@352 -- # local d=3 00:21:02.506 01:55:47 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:02.506 01:55:47 -- scripts/common.sh@354 -- # echo 3 00:21:02.506 01:55:47 -- scripts/common.sh@364 -- # ver1[v]=3 00:21:02.506 01:55:47 -- scripts/common.sh@365 -- # decimal 3 00:21:02.506 01:55:47 -- scripts/common.sh@352 -- # local d=3 00:21:02.506 01:55:47 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:02.506 01:55:47 -- scripts/common.sh@354 -- # echo 3 00:21:02.506 01:55:47 -- scripts/common.sh@365 -- # ver2[v]=3 00:21:02.506 01:55:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:02.506 01:55:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:02.506 01:55:47 -- scripts/common.sh@363 -- # (( v++ )) 00:21:02.506 01:55:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:02.506 01:55:47 -- scripts/common.sh@364 -- # decimal 0 00:21:02.506 01:55:47 -- scripts/common.sh@352 -- # local d=0 00:21:02.506 01:55:47 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:02.506 01:55:47 -- scripts/common.sh@354 -- # echo 0 00:21:02.506 01:55:47 -- scripts/common.sh@364 -- # ver1[v]=0 00:21:02.506 01:55:47 -- scripts/common.sh@365 -- # decimal 0 00:21:02.506 01:55:47 -- scripts/common.sh@352 -- # local d=0 00:21:02.506 01:55:47 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:02.506 01:55:47 -- scripts/common.sh@354 -- # echo 0 00:21:02.506 01:55:47 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:02.506 01:55:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:02.506 01:55:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:02.506 01:55:47 -- scripts/common.sh@363 -- # (( v++ )) 00:21:02.506 01:55:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:02.506 01:55:47 -- scripts/common.sh@364 -- # decimal 9 00:21:02.506 01:55:47 -- scripts/common.sh@352 -- # local d=9 00:21:02.506 01:55:47 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:02.506 01:55:47 -- scripts/common.sh@354 -- # echo 9 00:21:02.506 01:55:47 -- scripts/common.sh@364 -- # ver1[v]=9 00:21:02.506 01:55:47 -- scripts/common.sh@365 -- # decimal 0 00:21:02.506 01:55:47 -- scripts/common.sh@352 -- # local d=0 00:21:02.506 01:55:47 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:02.506 01:55:47 -- scripts/common.sh@354 -- # echo 0 00:21:02.506 01:55:47 -- scripts/common.sh@365 -- # ver2[v]=0 00:21:02.506 01:55:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:02.506 01:55:47 -- scripts/common.sh@366 -- # return 0 00:21:02.506 01:55:47 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:02.506 01:55:47 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:02.506 01:55:47 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:02.506 01:55:47 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:02.506 01:55:47 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:02.506 01:55:47 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:02.506 01:55:47 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:02.506 01:55:47 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:21:02.506 01:55:47 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:21:02.506 01:55:47 -- fips/fips.sh@114 -- # build_openssl_config 00:21:02.506 01:55:47 -- fips/fips.sh@37 -- # cat 00:21:02.506 01:55:47 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:21:02.506 01:55:47 -- fips/fips.sh@58 -- # cat - 00:21:02.506 01:55:47 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:02.506 01:55:47 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:21:02.506 01:55:47 -- fips/fips.sh@117 -- # mapfile -t providers 00:21:02.506 01:55:47 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:21:02.506 01:55:47 -- fips/fips.sh@117 -- # openssl list -providers 00:21:02.506 01:55:47 -- fips/fips.sh@117 -- # grep name 00:21:02.506 01:55:47 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:21:02.506 01:55:47 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:21:02.506 01:55:47 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:02.506 01:55:47 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:21:02.506 01:55:47 -- fips/fips.sh@128 -- # : 00:21:02.506 01:55:47 -- common/autotest_common.sh@640 -- # local es=0 00:21:02.506 01:55:47 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:02.506 01:55:47 -- common/autotest_common.sh@628 -- # local arg=openssl 00:21:02.506 01:55:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:02.506 01:55:47 -- common/autotest_common.sh@632 -- # type -t openssl 00:21:02.506 01:55:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:02.506 01:55:47 -- common/autotest_common.sh@634 -- # type -P openssl 00:21:02.506 01:55:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:02.506 01:55:47 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:21:02.506 01:55:47 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:21:02.506 01:55:47 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:21:02.506 Error setting digest 00:21:02.506 00927F1D3D7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:02.506 00927F1D3D7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:02.507 01:55:47 -- common/autotest_common.sh@643 -- # es=1 00:21:02.507 01:55:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:02.507 01:55:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:02.507 01:55:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
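The md5 probe above is a deliberate negative test: fips.sh builds spdk_fips.conf, points OPENSSL_CONF at it so the FIPS provider governs the default library context, then requires 'openssl md5' to fail (the "Error setting digest" / unsupported fetch seen above) and maps the non-zero exit to es=1, which the NOT wrapper treats as a pass. A standalone sketch of the same check, assuming a config equivalent to the generated one is in place:

# Expect MD5 to be rejected while the fips provider is active:
if OPENSSL_CONF=spdk_fips.conf openssl md5 /dev/null >/dev/null 2>&1; then
    echo "FIPS mode not enforced: MD5 digest succeeded" >&2
    exit 1
fi
echo "MD5 correctly unavailable under FIPS"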
00:21:02.507 01:55:47 -- fips/fips.sh@131 -- # nvmftestinit 00:21:02.507 01:55:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:02.507 01:55:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:02.507 01:55:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:02.507 01:55:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:02.507 01:55:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:02.507 01:55:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:02.507 01:55:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:02.507 01:55:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:02.507 01:55:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:02.507 01:55:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:02.507 01:55:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:02.507 01:55:47 -- common/autotest_common.sh@10 -- # set +x 00:21:04.409 01:55:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:04.409 01:55:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:04.409 01:55:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:04.409 01:55:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:04.409 01:55:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:04.409 01:55:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:04.409 01:55:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:04.409 01:55:49 -- nvmf/common.sh@294 -- # net_devs=() 00:21:04.409 01:55:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:04.409 01:55:49 -- nvmf/common.sh@295 -- # e810=() 00:21:04.409 01:55:49 -- nvmf/common.sh@295 -- # local -ga e810 00:21:04.409 01:55:49 -- nvmf/common.sh@296 -- # x722=() 00:21:04.409 01:55:49 -- nvmf/common.sh@296 -- # local -ga x722 00:21:04.409 01:55:49 -- nvmf/common.sh@297 -- # mlx=() 00:21:04.409 01:55:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:04.409 01:55:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:04.409 01:55:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:04.409 01:55:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:04.409 01:55:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:04.409 01:55:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:04.409 01:55:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:04.409 01:55:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:04.409 01:55:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:04.409 01:55:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:04.409 01:55:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:04.409 01:55:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:04.409 01:55:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:04.409 01:55:49 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:04.409 01:55:49 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:04.409 01:55:49 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:04.409 01:55:49 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:04.409 01:55:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:04.409 01:55:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:04.409 01:55:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:04.409 Found 0000:0a:00.0 
(0x8086 - 0x159b) 00:21:04.409 01:55:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:04.409 01:55:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:04.409 01:55:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.409 01:55:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.409 01:55:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:04.409 01:55:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:04.409 01:55:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:04.409 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:04.409 01:55:49 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:04.409 01:55:49 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:04.409 01:55:49 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:04.409 01:55:49 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:04.409 01:55:49 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:04.409 01:55:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:04.409 01:55:49 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:04.409 01:55:49 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:04.409 01:55:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:04.409 01:55:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.409 01:55:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:04.409 01:55:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.409 01:55:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:04.409 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:04.409 01:55:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.409 01:55:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:04.409 01:55:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:04.409 01:55:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:04.409 01:55:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:04.409 01:55:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:04.409 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:04.409 01:55:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:04.409 01:55:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:04.409 01:55:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:04.409 01:55:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:04.409 01:55:49 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:04.409 01:55:49 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:04.409 01:55:49 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:04.409 01:55:49 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:04.409 01:55:49 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:04.409 01:55:49 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:04.409 01:55:49 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:04.409 01:55:49 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:04.409 01:55:49 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:04.409 01:55:49 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:04.409 01:55:49 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:04.409 01:55:49 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:04.409 01:55:49 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:04.409 01:55:49 -- nvmf/common.sh@247 -- # ip netns 
add cvl_0_0_ns_spdk 00:21:04.409 01:55:49 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:04.409 01:55:49 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:04.409 01:55:49 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:04.409 01:55:49 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:04.409 01:55:49 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:04.409 01:55:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:04.409 01:55:49 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:04.409 01:55:49 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:04.409 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:04.409 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:21:04.409 00:21:04.409 --- 10.0.0.2 ping statistics --- 00:21:04.409 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.409 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:21:04.409 01:55:49 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:04.410 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:04.410 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:21:04.410 00:21:04.410 --- 10.0.0.1 ping statistics --- 00:21:04.410 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:04.410 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:21:04.410 01:55:49 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:04.410 01:55:49 -- nvmf/common.sh@410 -- # return 0 00:21:04.410 01:55:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:04.410 01:55:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:04.410 01:55:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:04.410 01:55:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:04.410 01:55:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:04.410 01:55:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:04.410 01:55:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:04.410 01:55:49 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:21:04.410 01:55:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:04.410 01:55:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:04.410 01:55:49 -- common/autotest_common.sh@10 -- # set +x 00:21:04.410 01:55:49 -- nvmf/common.sh@469 -- # nvmfpid=2194040 00:21:04.410 01:55:49 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:04.410 01:55:49 -- nvmf/common.sh@470 -- # waitforlisten 2194040 00:21:04.410 01:55:49 -- common/autotest_common.sh@819 -- # '[' -z 2194040 ']' 00:21:04.410 01:55:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:04.410 01:55:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:04.410 01:55:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:04.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:04.410 01:55:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:04.410 01:55:49 -- common/autotest_common.sh@10 -- # set +x 00:21:04.410 [2024-04-15 01:55:49.959600] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
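The namespace plumbing traced a few lines up is the core of nvmf_tcp_init, and every test stage in this log rebuilds it: one port of the two-port NIC moves into a private namespace to act as the target while the other stays in the root namespace as the initiator, so one host can exercise NVMe/TCP between its own ports. Condensed, using the cvl_0_0/cvl_0_1 names enumerated above:

    # Give the target its own network namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk

    # Address both ends: initiator in the root namespace, target inside the netns.
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

    # Bring the links up and open the NVMe/TCP port on the initiator side.
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Verify reachability in both directions before starting the target.
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is also why nvmf_tgt is launched under ip netns exec cvl_0_0_ns_spdk: its listener on 10.0.0.2:4420 then faces the initiator port in the root namespace.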
00:21:04.410 [2024-04-15 01:55:49.959670] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:04.410 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.410 [2024-04-15 01:55:50.025411] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:04.668 [2024-04-15 01:55:50.114850] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:04.668 [2024-04-15 01:55:50.115003] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.668 [2024-04-15 01:55:50.115023] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.668 [2024-04-15 01:55:50.115037] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:04.668 [2024-04-15 01:55:50.115099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.237 01:55:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:05.237 01:55:50 -- common/autotest_common.sh@852 -- # return 0 00:21:05.237 01:55:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:05.237 01:55:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:05.237 01:55:50 -- common/autotest_common.sh@10 -- # set +x 00:21:05.237 01:55:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.237 01:55:50 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:21:05.237 01:55:50 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:05.237 01:55:50 -- fips/fips.sh@138 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:05.237 01:55:50 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:05.237 01:55:50 -- fips/fips.sh@140 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:05.237 01:55:50 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:05.237 01:55:50 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:05.237 01:55:50 -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:05.804 [2024-04-15 01:55:51.147624] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.804 [2024-04-15 01:55:51.163603] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:05.804 [2024-04-15 01:55:51.163814] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.804 malloc0 00:21:05.804 01:55:51 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:05.804 01:55:51 -- fips/fips.sh@148 -- # bdevperf_pid=2194199 00:21:05.804 01:55:51 -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:05.804 01:55:51 -- fips/fips.sh@149 -- # waitforlisten 2194199 /var/tmp/bdevperf.sock 00:21:05.804 01:55:51 -- common/autotest_common.sh@819 -- # '[' -z 2194199 ']' 00:21:05.804 01:55:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:05.804 01:55:51 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:21:05.804 01:55:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:05.804 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:05.804 01:55:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:05.804 01:55:51 -- common/autotest_common.sh@10 -- # set +x 00:21:05.804 [2024-04-15 01:55:51.280380] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:21:05.804 [2024-04-15 01:55:51.280463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2194199 ] 00:21:05.804 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.804 [2024-04-15 01:55:51.340442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:05.804 [2024-04-15 01:55:51.427467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:06.739 01:55:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:06.739 01:55:52 -- common/autotest_common.sh@852 -- # return 0 00:21:06.740 01:55:52 -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:06.998 [2024-04-15 01:55:52.488946] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:06.998 TLSTESTn1 00:21:06.998 01:55:52 -- fips/fips.sh@155 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:07.257 Running I/O for 10 seconds... 
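Stripped of xtrace, the TLS setup just exercised is: write an interop-format PSK (NVMeTLSkey-1:01:...) to a file, lock its permissions, register it with the target, then hand the same file to the initiator via --psk. A condensed sketch of the initiator side, assuming an SPDK tree at $SPDK and the target from above already listening on 10.0.0.2:4420:

    SPDK=/path/to/spdk   # assumption: stands in for the Jenkins workspace checkout
    KEY=/tmp/key.txt     # assumption: the test keeps it under test/nvmf/fips/

    # Same interop-format PSK the test uses; both sides must hold the identical secret.
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$KEY"
    chmod 0600 "$KEY"

    # Start bdevperf with its own RPC socket, then attach a TLS-protected controller.
    "$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 &
    sleep 2   # crude stand-in for the test's waitforlisten polling

    "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"

    # Kick off the queued verify workload against the TLS-backed bdev.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

Note the explicit 0600 on the key file: the test treats the PSK as a credential, not as test data.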
00:21:17.272
00:21:17.272 Latency(us)
00:21:17.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:17.272 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:21:17.272 Verification LBA range: start 0x0 length 0x2000
00:21:17.272 TLSTESTn1 : 10.06 1120.65 4.38 0.00 0.00 114013.79 12184.84 147577.36
00:21:17.272 ===================================================================================================================
00:21:17.272 Total : 1120.65 4.38 0.00 0.00 114013.79 12184.84 147577.36
00:21:17.272 0
00:21:17.272 01:56:02 -- fips/fips.sh@1 -- # cleanup 00:21:17.272 01:56:02 -- fips/fips.sh@15 -- # process_shm --id 0 00:21:17.272 01:56:02 -- common/autotest_common.sh@796 -- # type=--id 00:21:17.272 01:56:02 -- common/autotest_common.sh@797 -- # id=0 00:21:17.272 01:56:02 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:21:17.272 01:56:02 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:17.272 01:56:02 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:21:17.272 01:56:02 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:21:17.272 01:56:02 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:21:17.272 01:56:02 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:17.272 nvmf_trace.0 00:21:17.272 01:56:02 -- common/autotest_common.sh@811 -- # return 0 00:21:17.272 01:56:02 -- fips/fips.sh@16 -- # killprocess 2194199 00:21:17.272 01:56:02 -- common/autotest_common.sh@926 -- # '[' -z 2194199 ']' 00:21:17.272 01:56:02 -- common/autotest_common.sh@930 -- # kill -0 2194199 00:21:17.272 01:56:02 -- common/autotest_common.sh@931 -- # uname 00:21:17.272 01:56:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:17.272 01:56:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2194199 00:21:17.272 01:56:02 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:21:17.272 01:56:02 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:21:17.272 01:56:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2194199' 00:21:17.272 killing process with pid 2194199 01:56:02 -- common/autotest_common.sh@945 -- # kill 2194199
Received shutdown signal, test time was about 10.000000 seconds
00:21:17.272
00:21:17.272 Latency(us)
00:21:17.272 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:17.272 ===================================================================================================================
00:21:17.272 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:17.272 01:56:02 -- common/autotest_common.sh@950 -- # wait 2194199 00:21:17.532 01:56:03 -- fips/fips.sh@17 -- # nvmftestfini 00:21:17.532 01:56:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:17.532 01:56:03 -- nvmf/common.sh@116 -- # sync 00:21:17.532 01:56:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:17.532 01:56:03 -- nvmf/common.sh@119 -- # set +e 00:21:17.532 01:56:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:17.532 01:56:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:17.532 rmmod nvme_tcp 00:21:17.532 rmmod nvme_fabrics 00:21:17.532 rmmod nvme_keyring 00:21:17.532 01:56:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:17.532 01:56:03 -- nvmf/common.sh@123 -- # set -e 00:21:17.532 01:56:03 -- nvmf/common.sh@124 -- # return 0
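As a quick consistency check on the verification table above: at a 4096-byte I/O size, MiB/s is just IOPS * 4096 / 2^20, and the reported columns agree:

    # 1120.65 IOPS at 4 KiB per I/O => MiB/s; matches the 4.38 printed above.
    echo '1120.65 * 4096 / 1048576' | bc -l    # 4.37753906...

The absolute numbers are modest, which is unsurprising here: this is a single TLS-encrypted connection at queue depth 128 driven by one core, run as a functional check rather than a tuned benchmark.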
00:21:17.532 01:56:03 -- nvmf/common.sh@477 -- # '[' -n 2194040 ']' 00:21:17.532 01:56:03 -- nvmf/common.sh@478 -- # killprocess 2194040 00:21:17.532 01:56:03 -- common/autotest_common.sh@926 -- # '[' -z 2194040 ']' 00:21:17.532 01:56:03 -- common/autotest_common.sh@930 -- # kill -0 2194040 00:21:17.532 01:56:03 -- common/autotest_common.sh@931 -- # uname 00:21:17.532 01:56:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:17.532 01:56:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2194040 00:21:17.532 01:56:03 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:17.532 01:56:03 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:17.532 01:56:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2194040' 00:21:17.532 killing process with pid 2194040 00:21:17.532 01:56:03 -- common/autotest_common.sh@945 -- # kill 2194040 00:21:17.532 01:56:03 -- common/autotest_common.sh@950 -- # wait 2194040 00:21:17.791 01:56:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:17.792 01:56:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:17.792 01:56:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:17.792 01:56:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:17.792 01:56:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:17.792 01:56:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:17.792 01:56:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:17.792 01:56:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.334 01:56:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:20.334 01:56:05 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:20.334 00:21:20.334 real 0m17.837s 00:21:20.334 user 0m23.138s 00:21:20.334 sys 0m6.163s 00:21:20.334 01:56:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:20.334 01:56:05 -- common/autotest_common.sh@10 -- # set +x 00:21:20.334 ************************************ 00:21:20.334 END TEST nvmf_fips 00:21:20.334 ************************************ 00:21:20.334 01:56:05 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:21:20.334 01:56:05 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:20.334 01:56:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:20.334 01:56:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:20.334 01:56:05 -- common/autotest_common.sh@10 -- # set +x 00:21:20.334 ************************************ 00:21:20.334 START TEST nvmf_fuzz 00:21:20.334 ************************************ 00:21:20.334 01:56:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:21:20.334 * Looking for test storage... 
00:21:20.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:20.334 01:56:05 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:20.334 01:56:05 -- nvmf/common.sh@7 -- # uname -s 00:21:20.334 01:56:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:20.334 01:56:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:20.334 01:56:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:20.334 01:56:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:20.334 01:56:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:20.334 01:56:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:20.334 01:56:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:20.334 01:56:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:20.334 01:56:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:20.334 01:56:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:20.334 01:56:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.334 01:56:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:20.334 01:56:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:20.334 01:56:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:20.334 01:56:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:20.334 01:56:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:20.334 01:56:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:20.335 01:56:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:20.335 01:56:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:20.335 01:56:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.335 01:56:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.335 01:56:05 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.335 01:56:05 -- paths/export.sh@5 -- # export PATH 00:21:20.335 01:56:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:20.335 01:56:05 -- nvmf/common.sh@46 -- # : 0 00:21:20.335 01:56:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:20.335 01:56:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:20.335 01:56:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:20.335 01:56:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:20.335 01:56:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:20.335 01:56:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:20.335 01:56:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:20.335 01:56:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:20.335 01:56:05 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:21:20.335 01:56:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:20.335 01:56:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:20.335 01:56:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:20.335 01:56:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:20.335 01:56:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:20.335 01:56:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:20.335 01:56:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:20.335 01:56:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:20.335 01:56:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:20.335 01:56:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:20.335 01:56:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:20.335 01:56:05 -- common/autotest_common.sh@10 -- # set +x 00:21:22.239 01:56:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:22.239 01:56:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:22.239 01:56:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:22.239 01:56:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:22.239 01:56:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:22.239 01:56:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:22.239 01:56:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:22.239 01:56:07 -- nvmf/common.sh@294 -- # net_devs=() 00:21:22.239 01:56:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:22.239 01:56:07 -- nvmf/common.sh@295 -- # e810=() 00:21:22.239 01:56:07 -- nvmf/common.sh@295 -- # local -ga e810 00:21:22.239 01:56:07 -- nvmf/common.sh@296 -- # x722=() 
00:21:22.239 01:56:07 -- nvmf/common.sh@296 -- # local -ga x722 00:21:22.239 01:56:07 -- nvmf/common.sh@297 -- # mlx=() 00:21:22.239 01:56:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:22.239 01:56:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:22.239 01:56:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:22.239 01:56:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:22.239 01:56:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:22.239 01:56:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:22.239 01:56:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:22.239 01:56:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:22.239 01:56:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:22.239 01:56:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:22.239 01:56:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:22.239 01:56:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:22.239 01:56:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:22.239 01:56:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:22.239 01:56:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:22.239 01:56:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:22.239 01:56:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:22.239 01:56:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:22.239 01:56:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:22.239 01:56:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:22.239 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:22.239 01:56:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:22.239 01:56:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:22.239 01:56:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.239 01:56:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.239 01:56:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:22.239 01:56:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:22.239 01:56:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:22.239 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:22.239 01:56:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:22.239 01:56:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:22.239 01:56:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:22.239 01:56:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:22.239 01:56:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:22.239 01:56:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:22.239 01:56:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:22.239 01:56:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:22.239 01:56:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:22.239 01:56:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.239 01:56:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:22.239 01:56:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.239 01:56:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:22.239 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:22.239 01:56:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 
00:21:22.239 01:56:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:22.239 01:56:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:22.240 01:56:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:22.240 01:56:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:22.240 01:56:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:22.240 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:22.240 01:56:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:22.240 01:56:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:22.240 01:56:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:22.240 01:56:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:22.240 01:56:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:22.240 01:56:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:22.240 01:56:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:22.240 01:56:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:22.240 01:56:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:22.240 01:56:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:22.240 01:56:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:22.240 01:56:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:22.240 01:56:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:22.240 01:56:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:22.240 01:56:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:22.240 01:56:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:22.240 01:56:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:22.240 01:56:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:22.240 01:56:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:22.240 01:56:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:22.240 01:56:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:22.240 01:56:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:22.240 01:56:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:22.240 01:56:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:22.240 01:56:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:22.240 01:56:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:22.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:22.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:21:22.240 00:21:22.240 --- 10.0.0.2 ping statistics --- 00:21:22.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.240 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:21:22.240 01:56:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:22.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:22.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:21:22.240 00:21:22.240 --- 10.0.0.1 ping statistics --- 00:21:22.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:22.240 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:21:22.240 01:56:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:22.240 01:56:07 -- nvmf/common.sh@410 -- # return 0 00:21:22.240 01:56:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:22.240 01:56:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:22.240 01:56:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:22.240 01:56:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:22.240 01:56:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:22.240 01:56:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:22.240 01:56:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:22.240 01:56:07 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=2197634 00:21:22.240 01:56:07 -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:22.240 01:56:07 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:21:22.240 01:56:07 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 2197634 00:21:22.240 01:56:07 -- common/autotest_common.sh@819 -- # '[' -z 2197634 ']' 00:21:22.240 01:56:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.240 01:56:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:22.240 01:56:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
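The fuzz stage that follows creates a malloc-backed subsystem and then runs nvme_fuzz against it twice. Stripped of xtrace, the two invocations below reduce to roughly this (flags copied from the log; $SPDK stands in for the Jenkins workspace path):

    SPDK=/path/to/spdk   # assumption
    TRID='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420'

    # Pass 1: 30 seconds of randomized admin and I/O commands, seeded with
    # -S 123456 so a failing run can be replayed deterministically.
    "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -r /var/tmp/nvme_fuzz \
        -t 30 -S 123456 -F "$TRID" -N -a

    # Pass 2: replays the fixed command set in example.json instead of random input.
    "$SPDK/test/app/fuzz/nvme_fuzz/nvme_fuzz" -m 0x2 -r /var/tmp/nvme_fuzz \
        -F "$TRID" -j "$SPDK/test/app/fuzz/nvme_fuzz/example.json" -a

The success criterion is simply that the target survives; the opcode dumps printed after each pass record which commands happened to succeed, and the random_seed values make a crash reproducible.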
00:21:22.240 01:56:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:22.240 01:56:07 -- common/autotest_common.sh@10 -- # set +x 00:21:23.176 01:56:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:23.176 01:56:08 -- common/autotest_common.sh@852 -- # return 0 00:21:23.176 01:56:08 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:23.176 01:56:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:23.176 01:56:08 -- common/autotest_common.sh@10 -- # set +x 00:21:23.176 01:56:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:23.176 01:56:08 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:21:23.176 01:56:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:23.176 01:56:08 -- common/autotest_common.sh@10 -- # set +x 00:21:23.176 Malloc0 00:21:23.176 01:56:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:23.176 01:56:08 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:23.176 01:56:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:23.176 01:56:08 -- common/autotest_common.sh@10 -- # set +x 00:21:23.176 01:56:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:23.176 01:56:08 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:23.176 01:56:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:23.176 01:56:08 -- common/autotest_common.sh@10 -- # set +x 00:21:23.176 01:56:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:23.176 01:56:08 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:23.176 01:56:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:23.176 01:56:08 -- common/autotest_common.sh@10 -- # set +x 00:21:23.176 01:56:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:23.176 01:56:08 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:21:23.176 01:56:08 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:21:55.248 Fuzzing completed. Shutting down the fuzz application 00:21:55.248 00:21:55.248 Dumping successful admin opcodes: 00:21:55.248 8, 9, 10, 24, 00:21:55.248 Dumping successful io opcodes: 00:21:55.248 0, 9, 00:21:55.248 NS: 0x200003aeff00 I/O qp, Total commands completed: 439360, total successful commands: 2562, random_seed: 2882564672 00:21:55.248 NS: 0x200003aeff00 admin qp, Total commands completed: 54864, total successful commands: 439, random_seed: 3576890176 00:21:55.248 01:56:39 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:55.248 Fuzzing completed. 
Shutting down the fuzz application 00:21:55.248 00:21:55.248 Dumping successful admin opcodes: 00:21:55.248 24, 00:21:55.248 Dumping successful io opcodes: 00:21:55.248 00:21:55.248 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2468049315 00:21:55.249 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2468164869 00:21:55.249 01:56:40 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:55.249 01:56:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:55.249 01:56:40 -- common/autotest_common.sh@10 -- # set +x 00:21:55.249 01:56:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:55.249 01:56:40 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:55.249 01:56:40 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:55.249 01:56:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:55.249 01:56:40 -- nvmf/common.sh@116 -- # sync 00:21:55.249 01:56:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:55.249 01:56:40 -- nvmf/common.sh@119 -- # set +e 00:21:55.249 01:56:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:55.249 01:56:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:55.249 rmmod nvme_tcp 00:21:55.249 rmmod nvme_fabrics 00:21:55.249 rmmod nvme_keyring 00:21:55.249 01:56:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:55.249 01:56:40 -- nvmf/common.sh@123 -- # set -e 00:21:55.249 01:56:40 -- nvmf/common.sh@124 -- # return 0 00:21:55.249 01:56:40 -- nvmf/common.sh@477 -- # '[' -n 2197634 ']' 00:21:55.249 01:56:40 -- nvmf/common.sh@478 -- # killprocess 2197634 00:21:55.249 01:56:40 -- common/autotest_common.sh@926 -- # '[' -z 2197634 ']' 00:21:55.249 01:56:40 -- common/autotest_common.sh@930 -- # kill -0 2197634 00:21:55.249 01:56:40 -- common/autotest_common.sh@931 -- # uname 00:21:55.249 01:56:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:55.249 01:56:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2197634 00:21:55.249 01:56:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:55.249 01:56:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:55.249 01:56:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2197634' 00:21:55.249 killing process with pid 2197634 00:21:55.249 01:56:40 -- common/autotest_common.sh@945 -- # kill 2197634 00:21:55.249 01:56:40 -- common/autotest_common.sh@950 -- # wait 2197634 00:21:55.507 01:56:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:55.507 01:56:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:55.507 01:56:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:55.507 01:56:40 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:55.507 01:56:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:55.507 01:56:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:55.507 01:56:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:55.507 01:56:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.410 01:56:42 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:21:57.410 01:56:42 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:57.410 00:21:57.410 real 0m37.537s 00:21:57.410 user 0m51.169s 00:21:57.410 sys 
0m15.506s 00:21:57.410 01:56:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:57.410 01:56:43 -- common/autotest_common.sh@10 -- # set +x 00:21:57.410 ************************************ 00:21:57.410 END TEST nvmf_fuzz 00:21:57.410 ************************************ 00:21:57.410 01:56:43 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:57.410 01:56:43 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:57.410 01:56:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:57.410 01:56:43 -- common/autotest_common.sh@10 -- # set +x 00:21:57.410 ************************************ 00:21:57.410 START TEST nvmf_multiconnection 00:21:57.410 ************************************ 00:21:57.410 01:56:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:21:57.668 * Looking for test storage... 00:21:57.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:57.668 01:56:43 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:57.668 01:56:43 -- nvmf/common.sh@7 -- # uname -s 00:21:57.668 01:56:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:57.668 01:56:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:57.668 01:56:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:57.668 01:56:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:57.668 01:56:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:57.668 01:56:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:57.668 01:56:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:57.668 01:56:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:57.668 01:56:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:57.668 01:56:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:57.668 01:56:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.668 01:56:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:57.668 01:56:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:57.668 01:56:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:57.668 01:56:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:57.668 01:56:43 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:57.668 01:56:43 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:57.668 01:56:43 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:57.668 01:56:43 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:57.668 01:56:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.668 01:56:43 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.668 01:56:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.668 01:56:43 -- paths/export.sh@5 -- # export PATH 00:21:57.668 01:56:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:57.668 01:56:43 -- nvmf/common.sh@46 -- # : 0 00:21:57.668 01:56:43 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:57.668 01:56:43 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:57.668 01:56:43 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:57.668 01:56:43 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:57.668 01:56:43 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:57.668 01:56:43 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:57.668 01:56:43 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:57.668 01:56:43 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:57.668 01:56:43 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:57.668 01:56:43 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:57.668 01:56:43 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:57.668 01:56:43 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:57.668 01:56:43 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:57.669 01:56:43 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:57.669 01:56:43 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:57.669 01:56:43 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:57.669 01:56:43 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:57.669 01:56:43 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:57.669 01:56:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:57.669 01:56:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:57.669 01:56:43 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:57.669 01:56:43 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:57.669 01:56:43 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:57.669 01:56:43 -- common/autotest_common.sh@10 -- 
# set +x 00:21:59.571 01:56:45 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:59.571 01:56:45 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:59.571 01:56:45 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:59.571 01:56:45 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:59.571 01:56:45 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:59.571 01:56:45 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:59.571 01:56:45 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:59.571 01:56:45 -- nvmf/common.sh@294 -- # net_devs=() 00:21:59.571 01:56:45 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:59.571 01:56:45 -- nvmf/common.sh@295 -- # e810=() 00:21:59.571 01:56:45 -- nvmf/common.sh@295 -- # local -ga e810 00:21:59.571 01:56:45 -- nvmf/common.sh@296 -- # x722=() 00:21:59.571 01:56:45 -- nvmf/common.sh@296 -- # local -ga x722 00:21:59.571 01:56:45 -- nvmf/common.sh@297 -- # mlx=() 00:21:59.571 01:56:45 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:59.571 01:56:45 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:59.571 01:56:45 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:59.571 01:56:45 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:59.571 01:56:45 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:59.571 01:56:45 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:59.571 01:56:45 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:59.571 01:56:45 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:59.571 01:56:45 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:59.571 01:56:45 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:59.571 01:56:45 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:59.571 01:56:45 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:59.571 01:56:45 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:59.571 01:56:45 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:21:59.571 01:56:45 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:21:59.571 01:56:45 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:21:59.571 01:56:45 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:21:59.571 01:56:45 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:59.571 01:56:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:59.571 01:56:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:59.571 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:59.571 01:56:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:59.571 01:56:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:59.572 01:56:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.572 01:56:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.572 01:56:45 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:59.572 01:56:45 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:59.572 01:56:45 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:59.572 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:59.572 01:56:45 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:21:59.572 01:56:45 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:21:59.572 01:56:45 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:59.572 01:56:45 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:59.572 01:56:45 -- 
nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:21:59.572 01:56:45 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:59.572 01:56:45 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:21:59.572 01:56:45 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:21:59.572 01:56:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:59.572 01:56:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.572 01:56:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:59.572 01:56:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.572 01:56:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:59.572 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:59.572 01:56:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.572 01:56:45 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:59.572 01:56:45 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:59.572 01:56:45 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:59.572 01:56:45 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:59.572 01:56:45 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:59.572 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:59.572 01:56:45 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:59.572 01:56:45 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:59.572 01:56:45 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:59.572 01:56:45 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:59.572 01:56:45 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:21:59.572 01:56:45 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:21:59.572 01:56:45 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.572 01:56:45 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.572 01:56:45 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:59.572 01:56:45 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:21:59.572 01:56:45 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:59.572 01:56:45 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:59.572 01:56:45 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:21:59.572 01:56:45 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:59.572 01:56:45 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:59.572 01:56:45 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:21:59.572 01:56:45 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:21:59.572 01:56:45 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:21:59.572 01:56:45 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:59.572 01:56:45 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:59.572 01:56:45 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:59.572 01:56:45 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:21:59.572 01:56:45 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:59.572 01:56:45 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:59.572 01:56:45 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:59.572 01:56:45 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:21:59.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:59.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:21:59.572 00:21:59.572 --- 10.0.0.2 ping statistics --- 00:21:59.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.572 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:21:59.572 01:56:45 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:59.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:59.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:21:59.572 00:21:59.572 --- 10.0.0.1 ping statistics --- 00:21:59.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:59.572 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:21:59.572 01:56:45 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:59.572 01:56:45 -- nvmf/common.sh@410 -- # return 0 00:21:59.572 01:56:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:59.572 01:56:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:59.572 01:56:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:59.572 01:56:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:59.572 01:56:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:59.572 01:56:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:59.572 01:56:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:59.831 01:56:45 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:59.831 01:56:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:59.831 01:56:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:59.831 01:56:45 -- common/autotest_common.sh@10 -- # set +x 00:21:59.831 01:56:45 -- nvmf/common.sh@469 -- # nvmfpid=2203515 00:21:59.831 01:56:45 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:59.831 01:56:45 -- nvmf/common.sh@470 -- # waitforlisten 2203515 00:21:59.831 01:56:45 -- common/autotest_common.sh@819 -- # '[' -z 2203515 ']' 00:21:59.831 01:56:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:59.831 01:56:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:59.831 01:56:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:59.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:59.831 01:56:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:59.831 01:56:45 -- common/autotest_common.sh@10 -- # set +x 00:21:59.831 [2024-04-15 01:56:45.280712] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:21:59.831 [2024-04-15 01:56:45.280795] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:59.831 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.831 [2024-04-15 01:56:45.347269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:59.831 [2024-04-15 01:56:45.431653] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:59.831 [2024-04-15 01:56:45.431802] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
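The RPC sequence that follows repeats one pattern per subsystem, eleven times in all (NVMF_SUBSYS=11): create a 64 MiB malloc bdev, wrap it in a subsystem, attach the namespace, and expose a TCP listener. Condensed into the loop the test is effectively running (direct rpc.py calls assumed in place of the test's rpc_cmd helper):

    SPDK=/path/to/spdk   # assumption: the workspace checkout
    RPC="$SPDK/scripts/rpc.py"

    "$RPC" nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 11); do
        "$RPC" bdev_malloc_create 64 512 -b "Malloc$i"      # 64 MiB bdev, 512 B blocks
        "$RPC" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        "$RPC" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        "$RPC" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done

All eleven subsystems share the one listener address; the point of the test is many concurrent controllers on a single NVMe/TCP port, not address fan-out.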
00:21:59.831 [2024-04-15 01:56:45.431820] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:59.831 [2024-04-15 01:56:45.431832] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:59.831 [2024-04-15 01:56:45.431883] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:59.831 [2024-04-15 01:56:45.431914] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.831 [2024-04-15 01:56:45.431974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:59.831 [2024-04-15 01:56:45.431976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:00.808 01:56:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:00.808 01:56:46 -- common/autotest_common.sh@852 -- # return 0 00:22:00.808 01:56:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:00.808 01:56:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:00.808 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:00.808 01:56:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.808 01:56:46 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:00.808 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.808 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:00.808 [2024-04-15 01:56:46.278716] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.808 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.808 01:56:46 -- target/multiconnection.sh@21 -- # seq 1 11 00:22:00.808 01:56:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.808 01:56:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:00.808 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.808 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:00.808 Malloc1 00:22:00.808 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.808 01:56:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:22:00.808 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.808 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:00.808 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.808 01:56:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:00.808 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.808 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:00.808 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.808 01:56:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.808 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.808 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:00.808 [2024-04-15 01:56:46.335989] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.808 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.808 01:56:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.808 01:56:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:22:00.808 01:56:46 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.808 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:00.808 Malloc2 00:22:00.808 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.808 01:56:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:22:00.808 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.808 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:00.808 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.808 01:56:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:22:00.808 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.808 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:00.808 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.808 01:56:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:22:00.808 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.808 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:00.808 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.808 01:56:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.808 01:56:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:22:00.808 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.808 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:00.808 Malloc3 00:22:00.808 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.808 01:56:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:22:00.808 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.808 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:00.808 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.808 01:56:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:22:00.808 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.808 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:00.808 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.808 01:56:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:22:00.808 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.809 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:00.809 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:00.809 01:56:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:00.809 01:56:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:22:00.809 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:00.809 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.068 Malloc4 00:22:01.068 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.068 01:56:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:22:01.068 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.068 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.068 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.068 01:56:46 -- target/multiconnection.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:22:01.068 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.068 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.068 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.068 01:56:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:22:01.068 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.068 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.068 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.068 01:56:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.068 01:56:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:22:01.068 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.068 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.068 Malloc5 00:22:01.068 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.068 01:56:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:22:01.068 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.068 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.068 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.068 01:56:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:22:01.068 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.068 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.068 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.068 01:56:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:22:01.068 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.068 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.068 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.068 01:56:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.068 01:56:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:22:01.068 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.068 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.068 Malloc6 00:22:01.068 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.068 01:56:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:22:01.068 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.068 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.068 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.068 01:56:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:22:01.068 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.068 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.068 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.068 01:56:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:22:01.068 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.068 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.068 01:56:46 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.068 01:56:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.068 01:56:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:22:01.068 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.068 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.068 Malloc7 00:22:01.068 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.068 01:56:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:22:01.068 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.068 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.068 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.068 01:56:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:22:01.068 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.068 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.068 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.068 01:56:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:22:01.068 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.068 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.068 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.068 01:56:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.068 01:56:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:22:01.068 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.069 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.069 Malloc8 00:22:01.069 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.069 01:56:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:22:01.069 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.069 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.069 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.069 01:56:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:22:01.069 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.069 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.069 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.069 01:56:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:22:01.069 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.069 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.069 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.069 01:56:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.069 01:56:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:22:01.069 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.069 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.069 Malloc9 00:22:01.069 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.069 01:56:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 
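The trace above is the multiconnection.sh provisioning loop unrolled: one TCP transport, then for each of the 11 subsystems a malloc bdev, a subsystem, a namespace, and a listener on 10.0.0.2:4420. Condensed back into the loop it comes from (rpc_cmd is the test wrapper around scripts/rpc.py):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 11); do
        rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"
        rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
            -t tcp -a 10.0.0.2 -s 4420
    done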
00:22:01.069 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.069 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.327 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.327 01:56:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:22:01.327 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.327 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.327 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.327 01:56:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:22:01.327 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.327 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.327 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.327 01:56:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.327 01:56:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:22:01.327 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.327 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.327 Malloc10 00:22:01.327 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.327 01:56:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:22:01.327 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.327 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.327 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.327 01:56:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:22:01.327 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.327 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.327 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.327 01:56:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:22:01.327 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.327 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.327 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.327 01:56:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.327 01:56:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:22:01.327 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.327 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.327 Malloc11 00:22:01.327 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.327 01:56:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:22:01.327 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.327 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.327 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.327 01:56:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:22:01.327 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.327 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.327 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.327 01:56:46 -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:22:01.327 01:56:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.328 01:56:46 -- common/autotest_common.sh@10 -- # set +x 00:22:01.328 01:56:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.328 01:56:46 -- target/multiconnection.sh@28 -- # seq 1 11 00:22:01.328 01:56:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.328 01:56:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:22:01.897 01:56:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:22:01.897 01:56:47 -- common/autotest_common.sh@1177 -- # local i=0 00:22:01.897 01:56:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:01.897 01:56:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:01.897 01:56:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:04.437 01:56:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:04.437 01:56:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:04.437 01:56:49 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:22:04.437 01:56:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:04.437 01:56:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:04.437 01:56:49 -- common/autotest_common.sh@1187 -- # return 0 00:22:04.437 01:56:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:04.437 01:56:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:22:04.696 01:56:50 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:22:04.696 01:56:50 -- common/autotest_common.sh@1177 -- # local i=0 00:22:04.696 01:56:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:04.696 01:56:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:04.696 01:56:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:06.604 01:56:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:06.604 01:56:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:06.604 01:56:52 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:22:06.604 01:56:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:06.604 01:56:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:06.604 01:56:52 -- common/autotest_common.sh@1187 -- # return 0 00:22:06.604 01:56:52 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:06.604 01:56:52 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:22:07.547 01:56:52 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:22:07.547 01:56:52 -- common/autotest_common.sh@1177 -- # local i=0 00:22:07.547 01:56:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:07.547 01:56:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:07.547 01:56:52 -- 
common/autotest_common.sh@1184 -- # sleep 2 00:22:09.446 01:56:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:09.446 01:56:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:09.446 01:56:54 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:22:09.446 01:56:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:09.446 01:56:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:09.446 01:56:54 -- common/autotest_common.sh@1187 -- # return 0 00:22:09.446 01:56:54 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:09.446 01:56:54 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:22:10.012 01:56:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:22:10.013 01:56:55 -- common/autotest_common.sh@1177 -- # local i=0 00:22:10.013 01:56:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:10.013 01:56:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:10.013 01:56:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:11.910 01:56:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:11.910 01:56:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:11.910 01:56:57 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:22:12.168 01:56:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:12.168 01:56:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:12.168 01:56:57 -- common/autotest_common.sh@1187 -- # return 0 00:22:12.168 01:56:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:12.168 01:56:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:22:12.734 01:56:58 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:22:12.734 01:56:58 -- common/autotest_common.sh@1177 -- # local i=0 00:22:12.734 01:56:58 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:12.734 01:56:58 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:12.734 01:56:58 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:15.260 01:57:00 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:15.260 01:57:00 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:15.260 01:57:00 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:22:15.260 01:57:00 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:15.260 01:57:00 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:15.260 01:57:00 -- common/autotest_common.sh@1187 -- # return 0 00:22:15.260 01:57:00 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:15.260 01:57:00 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:22:15.518 01:57:01 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:22:15.518 01:57:01 -- common/autotest_common.sh@1177 -- # local i=0 00:22:15.518 01:57:01 -- common/autotest_common.sh@1178 -- # local 
nvme_device_counter=1 nvme_devices=0 00:22:15.518 01:57:01 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:15.518 01:57:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:18.039 01:57:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:18.039 01:57:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:18.039 01:57:03 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:22:18.039 01:57:03 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:18.039 01:57:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:18.039 01:57:03 -- common/autotest_common.sh@1187 -- # return 0 00:22:18.039 01:57:03 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:18.039 01:57:03 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:22:18.654 01:57:03 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:22:18.654 01:57:03 -- common/autotest_common.sh@1177 -- # local i=0 00:22:18.654 01:57:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:18.654 01:57:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:18.654 01:57:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:20.554 01:57:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:20.554 01:57:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:20.554 01:57:06 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:22:20.554 01:57:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:20.554 01:57:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:20.554 01:57:06 -- common/autotest_common.sh@1187 -- # return 0 00:22:20.554 01:57:06 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:20.554 01:57:06 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:22:21.120 01:57:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:22:21.120 01:57:06 -- common/autotest_common.sh@1177 -- # local i=0 00:22:21.120 01:57:06 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:21.120 01:57:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:21.120 01:57:06 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:23.649 01:57:08 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:23.649 01:57:08 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:23.649 01:57:08 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:22:23.649 01:57:08 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:23.649 01:57:08 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:23.649 01:57:08 -- common/autotest_common.sh@1187 -- # return 0 00:22:23.649 01:57:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:23.649 01:57:08 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:22:23.907 01:57:09 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:22:23.907 
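The connect phase interleaved above repeats one pattern per subsystem: `nvme connect` over TCP, then waitforserial polls lsblk until a block device advertising the expected serial (SPDKn) shows up, giving up after roughly 15 attempts. The shape of one iteration, shown for cnode1 (taken from the waitforserial logic traced above):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 \
        --hostid=5b23e107-7094-e311-b1cb-001e67a97d55
    i=0
    while (( i++ <= 15 )); do
        # count devices whose SERIAL column matches; done when exactly one appears
        (( $(lsblk -l -o NAME,SERIAL | grep -c SPDK1) == 1 )) && break
        sleep 2
    done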
01:57:09 -- common/autotest_common.sh@1177 -- # local i=0 00:22:23.907 01:57:09 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:23.907 01:57:09 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:23.907 01:57:09 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:26.438 01:57:11 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:26.438 01:57:11 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:26.438 01:57:11 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:22:26.438 01:57:11 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:26.438 01:57:11 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:26.438 01:57:11 -- common/autotest_common.sh@1187 -- # return 0 00:22:26.438 01:57:11 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:26.438 01:57:11 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:22:27.005 01:57:12 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:22:27.005 01:57:12 -- common/autotest_common.sh@1177 -- # local i=0 00:22:27.005 01:57:12 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:27.005 01:57:12 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:27.005 01:57:12 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:28.901 01:57:14 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:28.901 01:57:14 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:28.901 01:57:14 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:22:28.901 01:57:14 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:28.901 01:57:14 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:28.901 01:57:14 -- common/autotest_common.sh@1187 -- # return 0 00:22:28.901 01:57:14 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:28.901 01:57:14 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:22:29.835 01:57:15 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:29.835 01:57:15 -- common/autotest_common.sh@1177 -- # local i=0 00:22:29.835 01:57:15 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:22:29.835 01:57:15 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:22:29.835 01:57:15 -- common/autotest_common.sh@1184 -- # sleep 2 00:22:31.732 01:57:17 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:22:31.732 01:57:17 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:22:31.732 01:57:17 -- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:22:31.732 01:57:17 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:22:31.732 01:57:17 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:22:31.732 01:57:17 -- common/autotest_common.sh@1187 -- # return 0 00:22:31.732 01:57:17 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:31.732 [global] 00:22:31.732 thread=1 00:22:31.732 invalidate=1 00:22:31.732 rw=read 00:22:31.732 time_based=1 00:22:31.732 
runtime=10 00:22:31.732 ioengine=libaio 00:22:31.732 direct=1 00:22:31.732 bs=262144 00:22:31.732 iodepth=64 00:22:31.732 norandommap=1 00:22:31.732 numjobs=1 00:22:31.732 00:22:31.732 [job0] 00:22:31.732 filename=/dev/nvme0n1 00:22:31.732 [job1] 00:22:31.732 filename=/dev/nvme10n1 00:22:31.732 [job2] 00:22:31.732 filename=/dev/nvme1n1 00:22:31.732 [job3] 00:22:31.732 filename=/dev/nvme2n1 00:22:31.732 [job4] 00:22:31.732 filename=/dev/nvme3n1 00:22:31.732 [job5] 00:22:31.732 filename=/dev/nvme4n1 00:22:31.732 [job6] 00:22:31.732 filename=/dev/nvme5n1 00:22:31.732 [job7] 00:22:31.732 filename=/dev/nvme6n1 00:22:31.732 [job8] 00:22:31.732 filename=/dev/nvme7n1 00:22:31.732 [job9] 00:22:31.732 filename=/dev/nvme8n1 00:22:31.732 [job10] 00:22:31.732 filename=/dev/nvme9n1 00:22:31.732 Could not set queue depth (nvme0n1) 00:22:31.732 Could not set queue depth (nvme10n1) 00:22:31.732 Could not set queue depth (nvme1n1) 00:22:31.732 Could not set queue depth (nvme2n1) 00:22:31.732 Could not set queue depth (nvme3n1) 00:22:31.732 Could not set queue depth (nvme4n1) 00:22:31.732 Could not set queue depth (nvme5n1) 00:22:31.732 Could not set queue depth (nvme6n1) 00:22:31.732 Could not set queue depth (nvme7n1) 00:22:31.732 Could not set queue depth (nvme8n1) 00:22:31.732 Could not set queue depth (nvme9n1) 00:22:31.990 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:31.990 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:31.990 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:31.990 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:31.990 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:31.990 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:31.990 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:31.990 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:31.990 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:31.990 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:31.990 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:31.990 fio-3.35 00:22:31.990 Starting 11 threads 00:22:44.243 00:22:44.243 job0: (groupid=0, jobs=1): err= 0: pid=2208573: Mon Apr 15 01:57:28 2024 00:22:44.243 read: IOPS=347, BW=86.9MiB/s (91.1MB/s)(883MiB/10161msec) 00:22:44.243 slat (usec): min=14, max=1107.8k, avg=2767.16, stdev=20536.20 00:22:44.243 clat (msec): min=52, max=1542, avg=181.22, stdev=184.70 00:22:44.243 lat (msec): min=57, max=1542, avg=183.98, stdev=186.70 00:22:44.243 clat percentiles (msec): 00:22:44.243 | 1.00th=[ 69], 5.00th=[ 85], 10.00th=[ 94], 20.00th=[ 108], 00:22:44.243 | 30.00th=[ 121], 40.00th=[ 134], 50.00th=[ 146], 60.00th=[ 161], 00:22:44.243 | 70.00th=[ 180], 80.00th=[ 201], 90.00th=[ 245], 95.00th=[ 334], 00:22:44.243 | 99.00th=[ 1502], 99.50th=[ 1519], 99.90th=[ 1536], 99.95th=[ 1536], 00:22:44.243 | 99.99th=[ 1536] 00:22:44.243 bw ( KiB/s): min= 6656, max=161469, per=6.18%, 
avg=93433.74, stdev=39962.81, samples=19 00:22:44.243 iops : min= 26, max= 630, avg=364.89, stdev=156.06, samples=19 00:22:44.243 lat (msec) : 100=13.70%, 250=76.90%, 500=7.62%, 2000=1.78% 00:22:44.243 cpu : usr=0.31%, sys=1.26%, ctx=819, majf=0, minf=4097 00:22:44.243 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:22:44.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.243 issued rwts: total=3532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.243 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.243 job1: (groupid=0, jobs=1): err= 0: pid=2208604: Mon Apr 15 01:57:28 2024 00:22:44.243 read: IOPS=715, BW=179MiB/s (188MB/s)(1819MiB/10170msec) 00:22:44.243 slat (usec): min=9, max=369849, avg=680.07, stdev=5575.14 00:22:44.243 clat (msec): min=3, max=1382, avg=88.68, stdev=92.56 00:22:44.243 lat (msec): min=3, max=1382, avg=89.36, stdev=92.99 00:22:44.243 clat percentiles (msec): 00:22:44.243 | 1.00th=[ 11], 5.00th=[ 25], 10.00th=[ 36], 20.00th=[ 43], 00:22:44.243 | 30.00th=[ 52], 40.00th=[ 64], 50.00th=[ 74], 60.00th=[ 84], 00:22:44.243 | 70.00th=[ 95], 80.00th=[ 113], 90.00th=[ 140], 95.00th=[ 180], 00:22:44.243 | 99.00th=[ 439], 99.50th=[ 447], 99.90th=[ 1385], 99.95th=[ 1385], 00:22:44.243 | 99.99th=[ 1385] 00:22:44.243 bw ( KiB/s): min=63488, max=294912, per=12.21%, avg=184621.25, stdev=63301.31, samples=20 00:22:44.243 iops : min= 248, max= 1152, avg=721.10, stdev=247.31, samples=20 00:22:44.243 lat (msec) : 4=0.10%, 10=0.76%, 20=2.68%, 50=25.16%, 100=44.33% 00:22:44.243 lat (msec) : 250=23.99%, 500=2.65%, 750=0.04%, 2000=0.29% 00:22:44.243 cpu : usr=0.35%, sys=2.06%, ctx=2318, majf=0, minf=3721 00:22:44.243 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:22:44.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.243 issued rwts: total=7277,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.243 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.243 job2: (groupid=0, jobs=1): err= 0: pid=2208645: Mon Apr 15 01:57:28 2024 00:22:44.243 read: IOPS=477, BW=119MiB/s (125MB/s)(1212MiB/10150msec) 00:22:44.243 slat (usec): min=9, max=1019.5k, avg=1974.18, stdev=16123.38 00:22:44.243 clat (msec): min=7, max=1531, avg=131.96, stdev=168.43 00:22:44.243 lat (msec): min=8, max=1546, avg=133.93, stdev=170.30 00:22:44.243 clat percentiles (msec): 00:22:44.243 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 55], 00:22:44.243 | 30.00th=[ 66], 40.00th=[ 77], 50.00th=[ 103], 60.00th=[ 120], 00:22:44.243 | 70.00th=[ 138], 80.00th=[ 161], 90.00th=[ 209], 95.00th=[ 309], 00:22:44.243 | 99.00th=[ 1452], 99.50th=[ 1485], 99.90th=[ 1519], 99.95th=[ 1519], 00:22:44.243 | 99.99th=[ 1536] 00:22:44.243 bw ( KiB/s): min= 7168, max=297984, per=8.52%, avg=128881.26, stdev=84611.63, samples=19 00:22:44.243 iops : min= 28, max= 1164, avg=503.37, stdev=330.51, samples=19 00:22:44.243 lat (msec) : 10=0.02%, 20=0.12%, 50=13.95%, 100=34.76%, 250=43.82% 00:22:44.243 lat (msec) : 500=6.02%, 2000=1.30% 00:22:44.243 cpu : usr=0.34%, sys=1.66%, ctx=1068, majf=0, minf=4097 00:22:44.243 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:44.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.243 issued rwts: total=4847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.243 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.243 job3: (groupid=0, jobs=1): err= 0: pid=2208656: Mon Apr 15 01:57:28 2024 00:22:44.243 read: IOPS=577, BW=144MiB/s (151MB/s)(1463MiB/10126msec) 00:22:44.243 slat (usec): min=10, max=133966, avg=1608.89, stdev=5230.78 00:22:44.243 clat (msec): min=4, max=399, avg=109.05, stdev=51.87 00:22:44.243 lat (msec): min=4, max=408, avg=110.66, stdev=52.64 00:22:44.243 clat percentiles (msec): 00:22:44.243 | 1.00th=[ 27], 5.00th=[ 42], 10.00th=[ 58], 20.00th=[ 71], 00:22:44.243 | 30.00th=[ 79], 40.00th=[ 87], 50.00th=[ 97], 60.00th=[ 112], 00:22:44.243 | 70.00th=[ 128], 80.00th=[ 148], 90.00th=[ 171], 95.00th=[ 211], 00:22:44.243 | 99.00th=[ 264], 99.50th=[ 292], 99.90th=[ 397], 99.95th=[ 397], 00:22:44.243 | 99.99th=[ 401] 00:22:44.243 bw ( KiB/s): min=63488, max=230400, per=9.80%, avg=148198.20, stdev=54022.56, samples=20 00:22:44.243 iops : min= 248, max= 900, avg=578.80, stdev=211.07, samples=20 00:22:44.243 lat (msec) : 10=0.26%, 20=0.17%, 50=7.37%, 100=44.45%, 250=45.95% 00:22:44.243 lat (msec) : 500=1.81% 00:22:44.243 cpu : usr=0.44%, sys=2.00%, ctx=1268, majf=0, minf=4097 00:22:44.243 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:22:44.243 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.243 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.243 issued rwts: total=5852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.243 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.243 job4: (groupid=0, jobs=1): err= 0: pid=2208660: Mon Apr 15 01:57:28 2024 00:22:44.243 read: IOPS=455, BW=114MiB/s (119MB/s)(1153MiB/10134msec) 00:22:44.243 slat (usec): min=9, max=1127.8k, avg=1059.42, stdev=18032.58 00:22:44.243 clat (msec): min=2, max=1544, avg=139.44, stdev=181.23 00:22:44.243 lat (msec): min=3, max=1544, avg=140.50, stdev=183.04 00:22:44.243 clat percentiles (msec): 00:22:44.243 | 1.00th=[ 7], 5.00th=[ 14], 10.00th=[ 20], 20.00th=[ 35], 00:22:44.243 | 30.00th=[ 60], 40.00th=[ 93], 50.00th=[ 114], 60.00th=[ 136], 00:22:44.243 | 70.00th=[ 153], 80.00th=[ 184], 90.00th=[ 257], 95.00th=[ 321], 00:22:44.243 | 99.00th=[ 1485], 99.50th=[ 1502], 99.90th=[ 1502], 99.95th=[ 1519], 00:22:44.243 | 99.99th=[ 1552] 00:22:44.243 bw ( KiB/s): min=11264, max=354304, per=8.10%, avg=122562.26, stdev=78724.56, samples=19 00:22:44.243 iops : min= 44, max= 1384, avg=478.68, stdev=307.53, samples=19 00:22:44.243 lat (msec) : 4=0.26%, 10=1.86%, 20=8.22%, 50=17.56%, 100=15.00% 00:22:44.243 lat (msec) : 250=46.00%, 500=9.71%, 750=0.02%, 2000=1.37% 00:22:44.244 cpu : usr=0.16%, sys=1.38%, ctx=1589, majf=0, minf=4097 00:22:44.244 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:22:44.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.244 issued rwts: total=4613,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.244 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.244 job5: (groupid=0, jobs=1): err= 0: pid=2208667: Mon Apr 15 01:57:28 2024 00:22:44.244 read: IOPS=646, BW=162MiB/s (170MB/s)(1632MiB/10088msec) 00:22:44.244 slat (usec): min=10, max=251825, avg=649.04, stdev=4377.42 00:22:44.244 clat (usec): min=1584, max=1547.5k, avg=98208.05, stdev=86370.77 00:22:44.244 lat (usec): 
min=1628, max=1556.7k, avg=98857.10, stdev=86467.66 00:22:44.244 clat percentiles (msec): 00:22:44.244 | 1.00th=[ 15], 5.00th=[ 40], 10.00th=[ 45], 20.00th=[ 53], 00:22:44.244 | 30.00th=[ 69], 40.00th=[ 74], 50.00th=[ 81], 60.00th=[ 95], 00:22:44.244 | 70.00th=[ 106], 80.00th=[ 128], 90.00th=[ 159], 95.00th=[ 188], 00:22:44.244 | 99.00th=[ 347], 99.50th=[ 430], 99.90th=[ 1552], 99.95th=[ 1552], 00:22:44.244 | 99.99th=[ 1552] 00:22:44.244 bw ( KiB/s): min=63488, max=278016, per=10.94%, avg=165410.15, stdev=56216.67, samples=20 00:22:44.244 iops : min= 248, max= 1086, avg=646.05, stdev=219.56, samples=20 00:22:44.244 lat (msec) : 2=0.05%, 4=0.09%, 10=0.49%, 20=0.74%, 50=16.41% 00:22:44.244 lat (msec) : 100=47.21%, 250=32.38%, 500=2.42%, 2000=0.21% 00:22:44.244 cpu : usr=0.33%, sys=2.24%, ctx=2100, majf=0, minf=4097 00:22:44.244 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:22:44.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.244 issued rwts: total=6526,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.244 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.244 job6: (groupid=0, jobs=1): err= 0: pid=2208668: Mon Apr 15 01:57:28 2024 00:22:44.244 read: IOPS=372, BW=93.1MiB/s (97.6MB/s)(946MiB/10159msec) 00:22:44.244 slat (usec): min=9, max=853192, avg=1427.65, stdev=17574.06 00:22:44.244 clat (usec): min=1521, max=1517.0k, avg=170358.37, stdev=197560.19 00:22:44.244 lat (usec): min=1540, max=1581.7k, avg=171786.03, stdev=199465.62 00:22:44.244 clat percentiles (msec): 00:22:44.244 | 1.00th=[ 8], 5.00th=[ 16], 10.00th=[ 34], 20.00th=[ 58], 00:22:44.244 | 30.00th=[ 87], 40.00th=[ 112], 50.00th=[ 132], 60.00th=[ 153], 00:22:44.244 | 70.00th=[ 176], 80.00th=[ 222], 90.00th=[ 313], 95.00th=[ 418], 00:22:44.244 | 99.00th=[ 1469], 99.50th=[ 1485], 99.90th=[ 1519], 99.95th=[ 1519], 00:22:44.244 | 99.99th=[ 1519] 00:22:44.244 bw ( KiB/s): min= 4096, max=174592, per=6.62%, avg=100181.47, stdev=47783.25, samples=19 00:22:44.244 iops : min= 16, max= 682, avg=391.26, stdev=186.61, samples=19 00:22:44.244 lat (msec) : 2=0.05%, 4=0.13%, 10=1.64%, 20=4.07%, 50=11.24% 00:22:44.244 lat (msec) : 100=17.98%, 250=49.87%, 500=12.61%, 750=0.74%, 2000=1.67% 00:22:44.244 cpu : usr=0.24%, sys=1.09%, ctx=1281, majf=0, minf=4097 00:22:44.244 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:22:44.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.244 issued rwts: total=3782,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.244 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.244 job7: (groupid=0, jobs=1): err= 0: pid=2208669: Mon Apr 15 01:57:28 2024 00:22:44.244 read: IOPS=633, BW=158MiB/s (166MB/s)(1599MiB/10095msec) 00:22:44.244 slat (usec): min=9, max=108052, avg=1006.88, stdev=4430.18 00:22:44.244 clat (msec): min=3, max=480, avg=99.92, stdev=63.48 00:22:44.244 lat (msec): min=3, max=482, avg=100.93, stdev=63.98 00:22:44.244 clat percentiles (msec): 00:22:44.244 | 1.00th=[ 9], 5.00th=[ 22], 10.00th=[ 36], 20.00th=[ 53], 00:22:44.244 | 30.00th=[ 66], 40.00th=[ 74], 50.00th=[ 86], 60.00th=[ 104], 00:22:44.244 | 70.00th=[ 118], 80.00th=[ 138], 90.00th=[ 163], 95.00th=[ 228], 00:22:44.244 | 99.00th=[ 326], 99.50th=[ 363], 99.90th=[ 477], 99.95th=[ 477], 00:22:44.244 | 99.99th=[ 481] 00:22:44.244 bw 
( KiB/s): min=67072, max=257539, per=10.72%, avg=162147.25, stdev=49750.31, samples=20 00:22:44.244 iops : min= 262, max= 1006, avg=633.30, stdev=194.37, samples=20 00:22:44.244 lat (msec) : 4=0.22%, 10=0.89%, 20=2.58%, 50=15.04%, 100=39.91% 00:22:44.244 lat (msec) : 250=37.46%, 500=3.91% 00:22:44.244 cpu : usr=0.32%, sys=1.92%, ctx=1701, majf=0, minf=4097 00:22:44.244 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:44.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.244 issued rwts: total=6397,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.244 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.244 job8: (groupid=0, jobs=1): err= 0: pid=2208670: Mon Apr 15 01:57:28 2024 00:22:44.244 read: IOPS=536, BW=134MiB/s (141MB/s)(1354MiB/10092msec) 00:22:44.244 slat (usec): min=9, max=326595, avg=932.03, stdev=7157.72 00:22:44.244 clat (msec): min=2, max=1441, avg=118.29, stdev=156.57 00:22:44.244 lat (msec): min=2, max=1441, avg=119.22, stdev=156.84 00:22:44.244 clat percentiles (msec): 00:22:44.244 | 1.00th=[ 8], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 38], 00:22:44.244 | 30.00th=[ 66], 40.00th=[ 81], 50.00th=[ 94], 60.00th=[ 109], 00:22:44.244 | 70.00th=[ 126], 80.00th=[ 153], 90.00th=[ 192], 95.00th=[ 271], 00:22:44.244 | 99.00th=[ 1401], 99.50th=[ 1435], 99.90th=[ 1435], 99.95th=[ 1435], 00:22:44.244 | 99.99th=[ 1435] 00:22:44.244 bw ( KiB/s): min=33280, max=247808, per=9.05%, avg=136959.65, stdev=46268.60, samples=20 00:22:44.244 iops : min= 130, max= 968, avg=534.95, stdev=180.74, samples=20 00:22:44.244 lat (msec) : 4=0.20%, 10=3.25%, 20=12.49%, 50=6.34%, 100=31.46% 00:22:44.244 lat (msec) : 250=39.01%, 500=6.21%, 2000=1.05% 00:22:44.244 cpu : usr=0.37%, sys=1.66%, ctx=1872, majf=0, minf=4097 00:22:44.244 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:44.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.244 issued rwts: total=5414,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.244 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.244 job9: (groupid=0, jobs=1): err= 0: pid=2208671: Mon Apr 15 01:57:28 2024 00:22:44.244 read: IOPS=611, BW=153MiB/s (160MB/s)(1549MiB/10133msec) 00:22:44.244 slat (usec): min=10, max=273392, avg=1111.86, stdev=5709.13 00:22:44.244 clat (msec): min=3, max=1494, avg=103.48, stdev=121.16 00:22:44.244 lat (msec): min=3, max=1498, avg=104.59, stdev=121.43 00:22:44.244 clat percentiles (msec): 00:22:44.244 | 1.00th=[ 9], 5.00th=[ 23], 10.00th=[ 35], 20.00th=[ 53], 00:22:44.244 | 30.00th=[ 60], 40.00th=[ 69], 50.00th=[ 79], 60.00th=[ 88], 00:22:44.244 | 70.00th=[ 99], 80.00th=[ 136], 90.00th=[ 178], 95.00th=[ 224], 00:22:44.244 | 99.00th=[ 726], 99.50th=[ 751], 99.90th=[ 1485], 99.95th=[ 1502], 00:22:44.244 | 99.99th=[ 1502] 00:22:44.244 bw ( KiB/s): min= 8704, max=280064, per=10.38%, avg=157012.15, stdev=81444.11, samples=20 00:22:44.244 iops : min= 34, max= 1094, avg=613.25, stdev=318.11, samples=20 00:22:44.244 lat (msec) : 4=0.10%, 10=2.13%, 20=1.78%, 50=14.75%, 100=52.74% 00:22:44.244 lat (msec) : 250=24.35%, 500=2.52%, 750=1.15%, 1000=0.05%, 2000=0.44% 00:22:44.244 cpu : usr=0.37%, sys=2.05%, ctx=1618, majf=0, minf=4097 00:22:44.244 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:22:44.244 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.244 issued rwts: total=6196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.244 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.244 job10: (groupid=0, jobs=1): err= 0: pid=2208672: Mon Apr 15 01:57:28 2024 00:22:44.244 read: IOPS=560, BW=140MiB/s (147MB/s)(1414MiB/10093msec) 00:22:44.244 slat (usec): min=9, max=347064, avg=1391.63, stdev=7924.63 00:22:44.244 clat (msec): min=3, max=553, avg=112.71, stdev=84.69 00:22:44.244 lat (msec): min=3, max=727, avg=114.10, stdev=85.46 00:22:44.244 clat percentiles (msec): 00:22:44.244 | 1.00th=[ 5], 5.00th=[ 9], 10.00th=[ 40], 20.00th=[ 66], 00:22:44.244 | 30.00th=[ 78], 40.00th=[ 86], 50.00th=[ 96], 60.00th=[ 108], 00:22:44.244 | 70.00th=[ 121], 80.00th=[ 138], 90.00th=[ 190], 95.00th=[ 262], 00:22:44.244 | 99.00th=[ 502], 99.50th=[ 527], 99.90th=[ 542], 99.95th=[ 558], 00:22:44.244 | 99.99th=[ 558] 00:22:44.244 bw ( KiB/s): min=52736, max=215040, per=9.47%, avg=143192.60, stdev=46073.13, samples=20 00:22:44.244 iops : min= 206, max= 840, avg=559.30, stdev=179.97, samples=20 00:22:44.244 lat (msec) : 4=0.83%, 10=4.56%, 20=1.01%, 50=6.65%, 100=40.36% 00:22:44.244 lat (msec) : 250=40.82%, 500=4.67%, 750=1.11% 00:22:44.244 cpu : usr=0.43%, sys=2.01%, ctx=1471, majf=0, minf=4097 00:22:44.244 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:22:44.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:44.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:44.244 issued rwts: total=5657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:44.244 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:44.244 00:22:44.244 Run status group 0 (all jobs): 00:22:44.244 READ: bw=1477MiB/s (1549MB/s), 86.9MiB/s-179MiB/s (91.1MB/s-188MB/s), io=14.7GiB (15.8GB), run=10088-10170msec 00:22:44.244 00:22:44.244 Disk stats (read/write): 00:22:44.244 nvme0n1: ios=6870/0, merge=0/0, ticks=1217732/0, in_queue=1217732, util=97.05% 00:22:44.244 nvme10n1: ios=14172/0, merge=0/0, ticks=1241456/0, in_queue=1241456, util=97.27% 00:22:44.244 nvme1n1: ios=9540/0, merge=0/0, ticks=1220381/0, in_queue=1220381, util=97.59% 00:22:44.244 nvme2n1: ios=11529/0, merge=0/0, ticks=1224153/0, in_queue=1224153, util=97.76% 00:22:44.244 nvme3n1: ios=9058/0, merge=0/0, ticks=1240185/0, in_queue=1240185, util=97.84% 00:22:44.244 nvme4n1: ios=12840/0, merge=0/0, ticks=1237868/0, in_queue=1237868, util=98.21% 00:22:44.244 nvme5n1: ios=7407/0, merge=0/0, ticks=1238461/0, in_queue=1238461, util=98.36% 00:22:44.244 nvme6n1: ios=12591/0, merge=0/0, ticks=1227272/0, in_queue=1227272, util=98.49% 00:22:44.244 nvme7n1: ios=10640/0, merge=0/0, ticks=1238204/0, in_queue=1238204, util=98.89% 00:22:44.244 nvme8n1: ios=12137/0, merge=0/0, ticks=1237887/0, in_queue=1237887, util=99.09% 00:22:44.244 nvme9n1: ios=11095/0, merge=0/0, ticks=1228241/0, in_queue=1228241, util=99.23% 00:22:44.244 01:57:28 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:44.244 [global] 00:22:44.244 thread=1 00:22:44.244 invalidate=1 00:22:44.244 rw=randwrite 00:22:44.244 time_based=1 00:22:44.244 runtime=10 00:22:44.245 ioengine=libaio 00:22:44.245 direct=1 00:22:44.245 bs=262144 00:22:44.245 iodepth=64 00:22:44.245 norandommap=1 00:22:44.245 numjobs=1 00:22:44.245 00:22:44.245 
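As with the read run, fio-wrapper emits a job file with the [global] keys just printed followed by one [jobN] stanza per connected namespace; the randwrite job list follows next in the log. A sketch of how such a file could be generated (file path is illustrative; on this host the glob matches only the 11 SPDK-backed devices, and its lexicographic order explains why job1 gets /dev/nvme10n1):

    cat > /tmp/multiconnection.fio <<'EOF'
    [global]
    thread=1
    invalidate=1
    rw=randwrite
    time_based=1
    runtime=10
    ioengine=libaio
    direct=1
    bs=262144
    iodepth=64
    norandommap=1
    numjobs=1
    EOF
    n=0
    for dev in /dev/nvme*n1; do   # sorts nvme0n1, nvme10n1, nvme1n1, ...
        printf '[job%d]\nfilename=%s\n' "$n" "$dev" >> /tmp/multiconnection.fio
        n=$((n+1))
    done
    fio /tmp/multiconnection.fio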
[job0] 00:22:44.245 filename=/dev/nvme0n1 00:22:44.245 [job1] 00:22:44.245 filename=/dev/nvme10n1 00:22:44.245 [job2] 00:22:44.245 filename=/dev/nvme1n1 00:22:44.245 [job3] 00:22:44.245 filename=/dev/nvme2n1 00:22:44.245 [job4] 00:22:44.245 filename=/dev/nvme3n1 00:22:44.245 [job5] 00:22:44.245 filename=/dev/nvme4n1 00:22:44.245 [job6] 00:22:44.245 filename=/dev/nvme5n1 00:22:44.245 [job7] 00:22:44.245 filename=/dev/nvme6n1 00:22:44.245 [job8] 00:22:44.245 filename=/dev/nvme7n1 00:22:44.245 [job9] 00:22:44.245 filename=/dev/nvme8n1 00:22:44.245 [job10] 00:22:44.245 filename=/dev/nvme9n1 00:22:44.245 Could not set queue depth (nvme0n1) 00:22:44.245 Could not set queue depth (nvme10n1) 00:22:44.245 Could not set queue depth (nvme1n1) 00:22:44.245 Could not set queue depth (nvme2n1) 00:22:44.245 Could not set queue depth (nvme3n1) 00:22:44.245 Could not set queue depth (nvme4n1) 00:22:44.245 Could not set queue depth (nvme5n1) 00:22:44.245 Could not set queue depth (nvme6n1) 00:22:44.245 Could not set queue depth (nvme7n1) 00:22:44.245 Could not set queue depth (nvme8n1) 00:22:44.245 Could not set queue depth (nvme9n1) 00:22:44.245 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.245 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.245 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.245 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.245 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.245 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.245 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.245 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.245 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.245 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.245 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:44.245 fio-3.35 00:22:44.245 Starting 11 threads 00:22:54.260 00:22:54.260 job0: (groupid=0, jobs=1): err= 0: pid=2209561: Mon Apr 15 01:57:39 2024 00:22:54.260 write: IOPS=145, BW=36.4MiB/s (38.1MB/s)(377MiB/10367msec); 0 zone resets 00:22:54.260 slat (usec): min=21, max=113462, avg=6473.03, stdev=13021.07 00:22:54.260 clat (msec): min=3, max=837, avg=433.12, stdev=154.93 00:22:54.260 lat (msec): min=5, max=837, avg=439.60, stdev=156.66 00:22:54.260 clat percentiles (msec): 00:22:54.260 | 1.00th=[ 12], 5.00th=[ 22], 10.00th=[ 251], 20.00th=[ 363], 00:22:54.261 | 30.00th=[ 393], 40.00th=[ 430], 50.00th=[ 456], 60.00th=[ 498], 00:22:54.261 | 70.00th=[ 523], 80.00th=[ 542], 90.00th=[ 567], 95.00th=[ 625], 00:22:54.261 | 99.00th=[ 760], 99.50th=[ 802], 99.90th=[ 835], 99.95th=[ 835], 00:22:54.261 | 99.99th=[ 835] 00:22:54.261 bw ( KiB/s): min=26624, max=62976, per=4.83%, avg=36992.00, stdev=9627.31, samples=20 00:22:54.261 iops : min= 104, max= 246, avg=144.50, stdev=37.61, samples=20 
00:22:54.261 lat (msec) : 4=0.07%, 10=0.60%, 20=3.51%, 50=2.85%, 100=0.53% 00:22:54.261 lat (msec) : 250=2.45%, 500=51.72%, 750=37.07%, 1000=1.19% 00:22:54.261 cpu : usr=0.52%, sys=0.39%, ctx=523, majf=0, minf=1 00:22:54.261 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.1%, 32=2.1%, >=64=95.8% 00:22:54.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.261 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.261 issued rwts: total=0,1508,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.261 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.261 job1: (groupid=0, jobs=1): err= 0: pid=2209562: Mon Apr 15 01:57:39 2024 00:22:54.261 write: IOPS=234, BW=58.7MiB/s (61.6MB/s)(601MiB/10229msec); 0 zone resets 00:22:54.261 slat (usec): min=18, max=2523.8k, avg=3639.26, stdev=51892.15 00:22:54.261 clat (msec): min=3, max=2728, avg=268.64, stdev=411.52 00:22:54.261 lat (msec): min=6, max=2728, avg=272.28, stdev=414.38 00:22:54.261 clat percentiles (msec): 00:22:54.261 | 1.00th=[ 14], 5.00th=[ 40], 10.00th=[ 85], 20.00th=[ 128], 00:22:54.261 | 30.00th=[ 163], 40.00th=[ 182], 50.00th=[ 213], 60.00th=[ 239], 00:22:54.261 | 70.00th=[ 255], 80.00th=[ 271], 90.00th=[ 305], 95.00th=[ 426], 00:22:54.261 | 99.00th=[ 2702], 99.50th=[ 2735], 99.90th=[ 2735], 99.95th=[ 2735], 00:22:54.261 | 99.99th=[ 2735] 00:22:54.261 bw ( KiB/s): min=39936, max=131584, per=9.77%, avg=74848.00, stdev=25050.16, samples=16 00:22:54.261 iops : min= 156, max= 514, avg=292.38, stdev=97.85, samples=16 00:22:54.261 lat (msec) : 4=0.04%, 10=0.37%, 20=1.29%, 50=5.20%, 100=7.32% 00:22:54.261 lat (msec) : 250=51.56%, 500=29.96%, 750=1.62%, >=2000=2.62% 00:22:54.261 cpu : usr=0.67%, sys=0.59%, ctx=1028, majf=0, minf=1 00:22:54.261 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:22:54.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.261 issued rwts: total=0,2403,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.261 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.261 job2: (groupid=0, jobs=1): err= 0: pid=2209563: Mon Apr 15 01:57:39 2024 00:22:54.261 write: IOPS=280, BW=70.2MiB/s (73.6MB/s)(711MiB/10123msec); 0 zone resets 00:22:54.261 slat (usec): min=23, max=85662, avg=2877.39, stdev=7593.04 00:22:54.261 clat (msec): min=9, max=630, avg=224.86, stdev=149.83 00:22:54.261 lat (msec): min=14, max=630, avg=227.74, stdev=151.91 00:22:54.261 clat percentiles (msec): 00:22:54.261 | 1.00th=[ 41], 5.00th=[ 95], 10.00th=[ 116], 20.00th=[ 130], 00:22:54.261 | 30.00th=[ 136], 40.00th=[ 142], 50.00th=[ 159], 60.00th=[ 180], 00:22:54.261 | 70.00th=[ 220], 80.00th=[ 300], 90.00th=[ 527], 95.00th=[ 558], 00:22:54.261 | 99.00th=[ 617], 99.50th=[ 625], 99.90th=[ 634], 99.95th=[ 634], 00:22:54.261 | 99.99th=[ 634] 00:22:54.261 bw ( KiB/s): min=24576, max=126976, per=9.29%, avg=71142.40, stdev=37440.17, samples=20 00:22:54.261 iops : min= 96, max= 496, avg=277.90, stdev=146.25, samples=20 00:22:54.261 lat (msec) : 10=0.04%, 20=0.11%, 50=1.37%, 100=4.93%, 250=67.80% 00:22:54.261 lat (msec) : 500=11.65%, 750=14.11% 00:22:54.261 cpu : usr=0.75%, sys=0.99%, ctx=1162, majf=0, minf=1 00:22:54.261 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:22:54.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:22:54.261 issued rwts: total=0,2842,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.261 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.261 job3: (groupid=0, jobs=1): err= 0: pid=2209564: Mon Apr 15 01:57:39 2024 00:22:54.261 write: IOPS=357, BW=89.4MiB/s (93.7MB/s)(908MiB/10155msec); 0 zone resets 00:22:54.261 slat (usec): min=17, max=1563.7k, avg=2240.88, stdev=27009.19 00:22:54.261 clat (msec): min=3, max=1662, avg=176.71, stdev=213.30 00:22:54.261 lat (msec): min=5, max=1666, avg=178.95, stdev=214.94 00:22:54.261 clat percentiles (msec): 00:22:54.261 | 1.00th=[ 24], 5.00th=[ 47], 10.00th=[ 67], 20.00th=[ 100], 00:22:54.261 | 30.00th=[ 104], 40.00th=[ 111], 50.00th=[ 126], 60.00th=[ 146], 00:22:54.261 | 70.00th=[ 184], 80.00th=[ 207], 90.00th=[ 275], 95.00th=[ 330], 00:22:54.261 | 99.00th=[ 1653], 99.50th=[ 1653], 99.90th=[ 1653], 99.95th=[ 1670], 00:22:54.261 | 99.99th=[ 1670] 00:22:54.261 bw ( KiB/s): min=33792, max=197120, per=13.24%, avg=101461.33, stdev=44017.38, samples=18 00:22:54.261 iops : min= 132, max= 770, avg=396.33, stdev=171.94, samples=18 00:22:54.261 lat (msec) : 4=0.03%, 10=0.17%, 20=0.55%, 50=5.07%, 100=15.76% 00:22:54.261 lat (msec) : 250=64.82%, 500=11.68%, 750=0.19%, 2000=1.74% 00:22:54.261 cpu : usr=1.01%, sys=0.84%, ctx=1857, majf=0, minf=1 00:22:54.261 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:22:54.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.261 issued rwts: total=0,3630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.261 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.261 job4: (groupid=0, jobs=1): err= 0: pid=2209565: Mon Apr 15 01:57:39 2024 00:22:54.261 write: IOPS=232, BW=58.1MiB/s (60.9MB/s)(608MiB/10454msec); 0 zone resets 00:22:54.261 slat (usec): min=16, max=3338.1k, avg=2735.97, stdev=68068.74 00:22:54.261 clat (msec): min=16, max=3635, avg=272.33, stdev=553.80 00:22:54.261 lat (msec): min=16, max=3646, avg=275.07, stdev=557.48 00:22:54.261 clat percentiles (msec): 00:22:54.261 | 1.00th=[ 34], 5.00th=[ 59], 10.00th=[ 80], 20.00th=[ 117], 00:22:54.261 | 30.00th=[ 132], 40.00th=[ 150], 50.00th=[ 161], 60.00th=[ 178], 00:22:54.261 | 70.00th=[ 201], 80.00th=[ 234], 90.00th=[ 313], 95.00th=[ 518], 00:22:54.261 | 99.00th=[ 3608], 99.50th=[ 3608], 99.90th=[ 3641], 99.95th=[ 3641], 00:22:54.261 | 99.99th=[ 3641] 00:22:54.261 bw ( KiB/s): min=33792, max=134144, per=11.30%, avg=86558.86, stdev=28960.80, samples=14 00:22:54.261 iops : min= 132, max= 524, avg=338.07, stdev=113.21, samples=14 00:22:54.261 lat (msec) : 20=0.21%, 50=2.92%, 100=12.02%, 250=67.65%, 500=12.14% 00:22:54.261 lat (msec) : 750=1.52%, 1000=0.95%, >=2000=2.59% 00:22:54.261 cpu : usr=0.64%, sys=0.72%, ctx=1637, majf=0, minf=1 00:22:54.261 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4% 00:22:54.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.261 issued rwts: total=0,2430,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.261 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.261 job5: (groupid=0, jobs=1): err= 0: pid=2209569: Mon Apr 15 01:57:39 2024 00:22:54.261 write: IOPS=139, BW=34.8MiB/s (36.5MB/s)(361MiB/10378msec); 0 zone resets 00:22:54.261 slat (usec): min=26, max=67208, avg=6832.16, stdev=12979.94 00:22:54.261 clat (msec): 
min=31, max=843, avg=452.18, stdev=124.21 00:22:54.261 lat (msec): min=31, max=843, avg=459.01, stdev=125.23 00:22:54.261 clat percentiles (msec): 00:22:54.261 | 1.00th=[ 75], 5.00th=[ 247], 10.00th=[ 292], 20.00th=[ 359], 00:22:54.261 | 30.00th=[ 388], 40.00th=[ 435], 50.00th=[ 472], 60.00th=[ 510], 00:22:54.261 | 70.00th=[ 535], 80.00th=[ 550], 90.00th=[ 575], 95.00th=[ 600], 00:22:54.261 | 99.00th=[ 768], 99.50th=[ 810], 99.90th=[ 827], 99.95th=[ 844], 00:22:54.261 | 99.99th=[ 844] 00:22:54.261 bw ( KiB/s): min=26624, max=49152, per=4.61%, avg=35353.60, stdev=6872.16, samples=20 00:22:54.261 iops : min= 104, max= 192, avg=138.10, stdev=26.84, samples=20 00:22:54.261 lat (msec) : 50=0.55%, 100=1.11%, 250=3.53%, 500=51.00%, 750=42.63% 00:22:54.261 lat (msec) : 1000=1.18% 00:22:54.261 cpu : usr=0.42%, sys=0.38%, ctx=426, majf=0, minf=1 00:22:54.261 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.2%, >=64=95.6% 00:22:54.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.261 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.261 issued rwts: total=0,1445,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.261 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.261 job6: (groupid=0, jobs=1): err= 0: pid=2209578: Mon Apr 15 01:57:39 2024 00:22:54.261 write: IOPS=322, BW=80.7MiB/s (84.6MB/s)(818MiB/10135msec); 0 zone resets 00:22:54.261 slat (usec): min=24, max=190668, avg=2942.08, stdev=7948.50 00:22:54.261 clat (msec): min=5, max=400, avg=195.13, stdev=94.91 00:22:54.261 lat (msec): min=5, max=400, avg=198.07, stdev=96.20 00:22:54.261 clat percentiles (msec): 00:22:54.261 | 1.00th=[ 36], 5.00th=[ 70], 10.00th=[ 90], 20.00th=[ 103], 00:22:54.261 | 30.00th=[ 112], 40.00th=[ 126], 50.00th=[ 188], 60.00th=[ 234], 00:22:54.261 | 70.00th=[ 262], 80.00th=[ 296], 90.00th=[ 330], 95.00th=[ 347], 00:22:54.261 | 99.00th=[ 372], 99.50th=[ 376], 99.90th=[ 384], 99.95th=[ 401], 00:22:54.261 | 99.99th=[ 401] 00:22:54.261 bw ( KiB/s): min=47104, max=164352, per=10.72%, avg=82150.40, stdev=37303.94, samples=20 00:22:54.261 iops : min= 184, max= 642, avg=320.90, stdev=145.72, samples=20 00:22:54.261 lat (msec) : 10=0.06%, 20=0.28%, 50=1.25%, 100=16.87%, 250=47.83% 00:22:54.261 lat (msec) : 500=33.71% 00:22:54.261 cpu : usr=0.95%, sys=0.94%, ctx=1024, majf=0, minf=1 00:22:54.261 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:22:54.261 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.261 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.261 issued rwts: total=0,3272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.261 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.261 job7: (groupid=0, jobs=1): err= 0: pid=2209579: Mon Apr 15 01:57:39 2024 00:22:54.261 write: IOPS=302, BW=75.7MiB/s (79.4MB/s)(772MiB/10192msec); 0 zone resets 00:22:54.261 slat (usec): min=16, max=1741.1k, avg=1805.39, stdev=35141.31 00:22:54.261 clat (msec): min=5, max=1975, avg=209.31, stdev=310.58 00:22:54.261 lat (msec): min=6, max=1975, avg=211.12, stdev=312.91 00:22:54.261 clat percentiles (msec): 00:22:54.261 | 1.00th=[ 24], 5.00th=[ 48], 10.00th=[ 66], 20.00th=[ 88], 00:22:54.261 | 30.00th=[ 97], 40.00th=[ 108], 50.00th=[ 122], 60.00th=[ 146], 00:22:54.261 | 70.00th=[ 197], 80.00th=[ 226], 90.00th=[ 317], 95.00th=[ 447], 00:22:54.261 | 99.00th=[ 1938], 99.50th=[ 1955], 99.90th=[ 1972], 99.95th=[ 1972], 00:22:54.261 | 99.99th=[ 1972] 00:22:54.261 bw ( 
KiB/s): min= 4096, max=188416, per=12.64%, avg=96800.00, stdev=47056.66, samples=16 00:22:54.261 iops : min= 16, max= 736, avg=378.12, stdev=183.82, samples=16 00:22:54.261 lat (msec) : 10=0.10%, 20=0.68%, 50=4.60%, 100=27.98%, 250=54.27% 00:22:54.262 lat (msec) : 500=8.03%, 750=0.26%, 1000=0.26%, 2000=3.82% 00:22:54.262 cpu : usr=0.94%, sys=0.82%, ctx=2279, majf=0, minf=1 00:22:54.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:22:54.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.262 issued rwts: total=0,3088,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.262 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.262 job8: (groupid=0, jobs=1): err= 0: pid=2209580: Mon Apr 15 01:57:39 2024 00:22:54.262 write: IOPS=500, BW=125MiB/s (131MB/s)(1270MiB/10158msec); 0 zone resets 00:22:54.262 slat (usec): min=17, max=88562, avg=1602.07, stdev=4345.58 00:22:54.262 clat (msec): min=4, max=514, avg=126.34, stdev=72.85 00:22:54.262 lat (msec): min=4, max=515, avg=127.94, stdev=73.39 00:22:54.262 clat percentiles (msec): 00:22:54.262 | 1.00th=[ 18], 5.00th=[ 46], 10.00th=[ 68], 20.00th=[ 80], 00:22:54.262 | 30.00th=[ 83], 40.00th=[ 85], 50.00th=[ 96], 60.00th=[ 122], 00:22:54.262 | 70.00th=[ 150], 80.00th=[ 176], 90.00th=[ 226], 95.00th=[ 264], 00:22:54.262 | 99.00th=[ 368], 99.50th=[ 456], 99.90th=[ 506], 99.95th=[ 510], 00:22:54.262 | 99.99th=[ 514] 00:22:54.262 bw ( KiB/s): min=60928, max=201728, per=16.76%, avg=128384.00, stdev=45715.03, samples=20 00:22:54.262 iops : min= 238, max= 788, avg=501.50, stdev=178.57, samples=20 00:22:54.262 lat (msec) : 10=0.10%, 20=1.34%, 50=4.17%, 100=46.13%, 250=41.76% 00:22:54.262 lat (msec) : 500=6.30%, 750=0.20% 00:22:54.262 cpu : usr=1.29%, sys=1.59%, ctx=2034, majf=0, minf=1 00:22:54.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:22:54.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.262 issued rwts: total=0,5079,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.262 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.262 job9: (groupid=0, jobs=1): err= 0: pid=2209581: Mon Apr 15 01:57:39 2024 00:22:54.262 write: IOPS=149, BW=37.3MiB/s (39.1MB/s)(388MiB/10381msec); 0 zone resets 00:22:54.262 slat (usec): min=25, max=304833, avg=6029.68, stdev=15052.76 00:22:54.262 clat (msec): min=28, max=786, avg=422.25, stdev=189.59 00:22:54.262 lat (msec): min=28, max=786, avg=428.28, stdev=192.12 00:22:54.262 clat percentiles (msec): 00:22:54.262 | 1.00th=[ 40], 5.00th=[ 57], 10.00th=[ 82], 20.00th=[ 142], 00:22:54.262 | 30.00th=[ 409], 40.00th=[ 451], 50.00th=[ 481], 60.00th=[ 510], 00:22:54.262 | 70.00th=[ 542], 80.00th=[ 567], 90.00th=[ 617], 95.00th=[ 659], 00:22:54.262 | 99.00th=[ 743], 99.50th=[ 751], 99.90th=[ 760], 99.95th=[ 785], 00:22:54.262 | 99.99th=[ 785] 00:22:54.262 bw ( KiB/s): min=19968, max=128512, per=4.97%, avg=38041.60, stdev=23278.06, samples=20 00:22:54.262 iops : min= 78, max= 502, avg=148.60, stdev=90.93, samples=20 00:22:54.262 lat (msec) : 50=4.06%, 100=7.16%, 250=10.13%, 500=35.87%, 750=42.45% 00:22:54.262 lat (msec) : 1000=0.32% 00:22:54.262 cpu : usr=0.45%, sys=0.45%, ctx=680, majf=0, minf=1 00:22:54.262 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.5%, 16=1.0%, 32=2.1%, >=64=95.9% 00:22:54.262 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.262 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.262 issued rwts: total=0,1550,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.262 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.262 job10: (groupid=0, jobs=1): err= 0: pid=2209582: Mon Apr 15 01:57:39 2024 00:22:54.262 write: IOPS=397, BW=99.5MiB/s (104MB/s)(1009MiB/10148msec); 0 zone resets 00:22:54.262 slat (usec): min=18, max=87358, avg=2204.02, stdev=5428.15 00:22:54.262 clat (msec): min=2, max=417, avg=158.58, stdev=97.24 00:22:54.262 lat (msec): min=2, max=417, avg=160.79, stdev=98.60 00:22:54.262 clat percentiles (msec): 00:22:54.262 | 1.00th=[ 18], 5.00th=[ 62], 10.00th=[ 85], 20.00th=[ 90], 00:22:54.262 | 30.00th=[ 92], 40.00th=[ 94], 50.00th=[ 97], 60.00th=[ 142], 00:22:54.262 | 70.00th=[ 207], 80.00th=[ 271], 90.00th=[ 326], 95.00th=[ 347], 00:22:54.262 | 99.00th=[ 372], 99.50th=[ 376], 99.90th=[ 393], 99.95th=[ 414], 00:22:54.262 | 99.99th=[ 418] 00:22:54.262 bw ( KiB/s): min=45056, max=199168, per=13.28%, avg=101708.80, stdev=54757.24, samples=20 00:22:54.262 iops : min= 176, max= 778, avg=397.30, stdev=213.90, samples=20 00:22:54.262 lat (msec) : 4=0.15%, 10=0.30%, 20=0.92%, 50=2.63%, 100=49.69% 00:22:54.262 lat (msec) : 250=24.00%, 500=22.32% 00:22:54.262 cpu : usr=1.02%, sys=1.34%, ctx=1539, majf=0, minf=1 00:22:54.262 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:22:54.262 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:54.262 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:54.262 issued rwts: total=0,4037,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:54.262 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:54.262 00:22:54.262 Run status group 0 (all jobs): 00:22:54.262 WRITE: bw=748MiB/s (784MB/s), 34.8MiB/s-125MiB/s (36.5MB/s-131MB/s), io=7821MiB (8201MB), run=10123-10454msec 00:22:54.262 00:22:54.262 Disk stats (read/write): 00:22:54.262 nvme0n1: ios=49/2950, merge=0/0, ticks=398/1219505, in_queue=1219903, util=98.30% 00:22:54.262 nvme10n1: ios=49/4733, merge=0/0, ticks=229/1232028, in_queue=1232257, util=98.53% 00:22:54.262 nvme1n1: ios=54/5496, merge=0/0, ticks=950/1210805, in_queue=1211755, util=99.47% 00:22:54.262 nvme2n1: ios=49/7090, merge=0/0, ticks=140/1212748, in_queue=1212888, util=98.70% 00:22:54.262 nvme3n1: ios=49/4814, merge=0/0, ticks=102/1267157, in_queue=1267259, util=98.22% 00:22:54.262 nvme4n1: ios=43/2819, merge=0/0, ticks=1879/1219865, in_queue=1221744, util=100.00% 00:22:54.262 nvme5n1: ios=52/6359, merge=0/0, ticks=1068/1200861, in_queue=1201929, util=100.00% 00:22:54.262 nvme6n1: ios=20/6161, merge=0/0, ticks=80/1258620, in_queue=1258700, util=98.51% 00:22:54.262 nvme7n1: ios=0/9979, merge=0/0, ticks=0/1210187, in_queue=1210187, util=98.77% 00:22:54.262 nvme8n1: ios=0/3022, merge=0/0, ticks=0/1221485, in_queue=1221485, util=98.97% 00:22:54.262 nvme9n1: ios=0/7886, merge=0/0, ticks=0/1211952, in_queue=1211952, util=99.10% 00:22:54.262 01:57:39 -- target/multiconnection.sh@36 -- # sync 00:22:54.262 01:57:39 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:54.262 01:57:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.262 01:57:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:54.262 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:54.262 01:57:39 -- target/multiconnection.sh@39 -- # 
waitforserial_disconnect SPDK1 00:22:54.262 01:57:39 -- common/autotest_common.sh@1198 -- # local i=0 00:22:54.262 01:57:39 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:54.262 01:57:39 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:22:54.262 01:57:39 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:54.262 01:57:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:22:54.262 01:57:39 -- common/autotest_common.sh@1210 -- # return 0 00:22:54.262 01:57:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:54.262 01:57:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.262 01:57:39 -- common/autotest_common.sh@10 -- # set +x 00:22:54.262 01:57:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.262 01:57:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.262 01:57:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:54.262 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:54.262 01:57:39 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:54.262 01:57:39 -- common/autotest_common.sh@1198 -- # local i=0 00:22:54.262 01:57:39 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:54.262 01:57:39 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:22:54.262 01:57:39 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:54.262 01:57:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:22:54.262 01:57:39 -- common/autotest_common.sh@1210 -- # return 0 00:22:54.262 01:57:39 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:54.262 01:57:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.262 01:57:39 -- common/autotest_common.sh@10 -- # set +x 00:22:54.262 01:57:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.262 01:57:39 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.262 01:57:39 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:54.521 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:54.522 01:57:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:54.522 01:57:40 -- common/autotest_common.sh@1198 -- # local i=0 00:22:54.522 01:57:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:54.522 01:57:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:22:54.522 01:57:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:54.522 01:57:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:22:54.522 01:57:40 -- common/autotest_common.sh@1210 -- # return 0 00:22:54.522 01:57:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:54.522 01:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.522 01:57:40 -- common/autotest_common.sh@10 -- # set +x 00:22:54.522 01:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.522 01:57:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:54.522 01:57:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:55.090 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:55.090 01:57:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:55.090 01:57:40 -- common/autotest_common.sh@1198 -- # local 
i=0 00:22:55.090 01:57:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:55.090 01:57:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:22:55.090 01:57:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:55.090 01:57:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:22:55.090 01:57:40 -- common/autotest_common.sh@1210 -- # return 0 00:22:55.090 01:57:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:55.090 01:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.090 01:57:40 -- common/autotest_common.sh@10 -- # set +x 00:22:55.090 01:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.090 01:57:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.090 01:57:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:55.351 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:55.351 01:57:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:55.351 01:57:40 -- common/autotest_common.sh@1198 -- # local i=0 00:22:55.351 01:57:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:55.351 01:57:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:22:55.351 01:57:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:55.351 01:57:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:22:55.351 01:57:40 -- common/autotest_common.sh@1210 -- # return 0 00:22:55.351 01:57:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:55.351 01:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.351 01:57:40 -- common/autotest_common.sh@10 -- # set +x 00:22:55.351 01:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.351 01:57:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.351 01:57:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:55.351 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:55.351 01:57:40 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:55.351 01:57:40 -- common/autotest_common.sh@1198 -- # local i=0 00:22:55.351 01:57:40 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:55.351 01:57:40 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:22:55.351 01:57:40 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:55.351 01:57:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:22:55.351 01:57:40 -- common/autotest_common.sh@1210 -- # return 0 00:22:55.351 01:57:40 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:55.351 01:57:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.351 01:57:40 -- common/autotest_common.sh@10 -- # set +x 00:22:55.351 01:57:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.351 01:57:40 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.351 01:57:40 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:55.611 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:55.611 01:57:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:55.611 01:57:41 -- common/autotest_common.sh@1198 -- # local i=0 00:22:55.611 01:57:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:55.611 
01:57:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:22:55.611 01:57:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:55.611 01:57:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:22:55.611 01:57:41 -- common/autotest_common.sh@1210 -- # return 0 00:22:55.611 01:57:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:55.611 01:57:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.611 01:57:41 -- common/autotest_common.sh@10 -- # set +x 00:22:55.611 01:57:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.611 01:57:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.611 01:57:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:55.871 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:55.871 01:57:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:55.871 01:57:41 -- common/autotest_common.sh@1198 -- # local i=0 00:22:55.871 01:57:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:55.871 01:57:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:22:55.871 01:57:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:55.871 01:57:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:22:55.871 01:57:41 -- common/autotest_common.sh@1210 -- # return 0 00:22:55.871 01:57:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:55.871 01:57:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.871 01:57:41 -- common/autotest_common.sh@10 -- # set +x 00:22:55.871 01:57:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.871 01:57:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:55.871 01:57:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:56.129 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:56.129 01:57:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:56.129 01:57:41 -- common/autotest_common.sh@1198 -- # local i=0 00:22:56.129 01:57:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:56.129 01:57:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:22:56.129 01:57:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:56.129 01:57:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:22:56.129 01:57:41 -- common/autotest_common.sh@1210 -- # return 0 00:22:56.129 01:57:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:56.129 01:57:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.129 01:57:41 -- common/autotest_common.sh@10 -- # set +x 00:22:56.129 01:57:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.129 01:57:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:56.129 01:57:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:56.387 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:56.387 01:57:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:56.387 01:57:41 -- common/autotest_common.sh@1198 -- # local i=0 00:22:56.387 01:57:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:56.387 01:57:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:22:56.387 01:57:41 -- 
common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:56.387 01:57:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:22:56.387 01:57:41 -- common/autotest_common.sh@1210 -- # return 0 00:22:56.387 01:57:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:56.387 01:57:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.387 01:57:41 -- common/autotest_common.sh@10 -- # set +x 00:22:56.387 01:57:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.387 01:57:41 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:56.387 01:57:41 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:22:56.387 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:56.388 01:57:41 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:56.388 01:57:41 -- common/autotest_common.sh@1198 -- # local i=0 00:22:56.388 01:57:41 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:22:56.388 01:57:41 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:22:56.388 01:57:41 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:22:56.388 01:57:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:22:56.388 01:57:41 -- common/autotest_common.sh@1210 -- # return 0 00:22:56.388 01:57:41 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:56.388 01:57:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:56.388 01:57:41 -- common/autotest_common.sh@10 -- # set +x 00:22:56.388 01:57:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:56.388 01:57:41 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:56.388 01:57:41 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:56.388 01:57:41 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:56.388 01:57:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:56.388 01:57:41 -- nvmf/common.sh@116 -- # sync 00:22:56.388 01:57:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:56.388 01:57:41 -- nvmf/common.sh@119 -- # set +e 00:22:56.388 01:57:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:56.388 01:57:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:56.388 rmmod nvme_tcp 00:22:56.388 rmmod nvme_fabrics 00:22:56.388 rmmod nvme_keyring 00:22:56.388 01:57:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:56.388 01:57:42 -- nvmf/common.sh@123 -- # set -e 00:22:56.388 01:57:42 -- nvmf/common.sh@124 -- # return 0 00:22:56.388 01:57:42 -- nvmf/common.sh@477 -- # '[' -n 2203515 ']' 00:22:56.388 01:57:42 -- nvmf/common.sh@478 -- # killprocess 2203515 00:22:56.388 01:57:42 -- common/autotest_common.sh@926 -- # '[' -z 2203515 ']' 00:22:56.388 01:57:42 -- common/autotest_common.sh@930 -- # kill -0 2203515 00:22:56.388 01:57:42 -- common/autotest_common.sh@931 -- # uname 00:22:56.388 01:57:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:56.388 01:57:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2203515 00:22:56.647 01:57:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:56.647 01:57:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:56.647 01:57:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2203515' 00:22:56.647 killing process with pid 2203515 00:22:56.647 01:57:42 -- common/autotest_common.sh@945 -- # kill 2203515 00:22:56.647 01:57:42 -- 
common/autotest_common.sh@950 -- # wait 2203515 00:22:57.214 01:57:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:57.214 01:57:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:57.214 01:57:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:57.214 01:57:42 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:57.214 01:57:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:57.214 01:57:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.214 01:57:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:57.214 01:57:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.121 01:57:44 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:22:59.121 00:22:59.121 real 1m1.562s 00:22:59.121 user 3m28.886s 00:22:59.121 sys 0m18.908s 00:22:59.121 01:57:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:59.121 01:57:44 -- common/autotest_common.sh@10 -- # set +x 00:22:59.121 ************************************ 00:22:59.121 END TEST nvmf_multiconnection 00:22:59.121 ************************************ 00:22:59.121 01:57:44 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:59.121 01:57:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:59.121 01:57:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:59.121 01:57:44 -- common/autotest_common.sh@10 -- # set +x 00:22:59.121 ************************************ 00:22:59.121 START TEST nvmf_initiator_timeout 00:22:59.121 ************************************ 00:22:59.121 01:57:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:22:59.121 * Looking for test storage... 
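[Editor's note] The teardown that just completed for cnode1 through cnode11 is the same three-step loop each time; condensed from the multiconnection.sh@37-@40 xtrace above:

  for i in $(seq 1 $NVMF_SUBSYS); do                             # 11 subsystems in this run
      nvme disconnect -n "nqn.2016-06.io.spdk:cnode$i"           # drop the host-side controller
      waitforserial_disconnect "SPDK$i"                          # poll lsblk -o NAME,SERIAL until the serial is gone
      rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i" # remove the subsystem on the target
  done

Each disconnect is confirmed both by the "disconnected 1 controller(s)" message and by the lsblk polling in waitforserial_disconnect before the subsystem is deleted over RPC.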
00:22:59.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:59.121 01:57:44 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:59.121 01:57:44 -- nvmf/common.sh@7 -- # uname -s 00:22:59.121 01:57:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:59.121 01:57:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:59.121 01:57:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:59.121 01:57:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:59.121 01:57:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:59.121 01:57:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:59.121 01:57:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:59.121 01:57:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:59.121 01:57:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:59.121 01:57:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:59.121 01:57:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:59.121 01:57:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:59.121 01:57:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:59.121 01:57:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:59.121 01:57:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:59.121 01:57:44 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:59.121 01:57:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:59.121 01:57:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:59.121 01:57:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:59.121 01:57:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.121 01:57:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.121 01:57:44 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.121 01:57:44 -- paths/export.sh@5 -- # export PATH 00:22:59.121 01:57:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:59.121 01:57:44 -- nvmf/common.sh@46 -- # : 0 00:22:59.121 01:57:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:59.121 01:57:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:59.121 01:57:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:59.121 01:57:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:59.121 01:57:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:59.121 01:57:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:59.121 01:57:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:59.121 01:57:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:59.121 01:57:44 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:59.121 01:57:44 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:59.121 01:57:44 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:59.121 01:57:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:59.121 01:57:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:59.121 01:57:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:59.121 01:57:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:59.121 01:57:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:59.121 01:57:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.121 01:57:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:59.121 01:57:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:59.121 01:57:44 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:59.121 01:57:44 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:59.121 01:57:44 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:59.121 01:57:44 -- common/autotest_common.sh@10 -- # set +x 00:23:01.025 01:57:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:01.025 01:57:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:01.025 01:57:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:01.025 01:57:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:01.025 01:57:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:01.025 01:57:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:01.025 01:57:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:01.025 01:57:46 -- nvmf/common.sh@294 -- # net_devs=() 00:23:01.025 01:57:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:01.025 
01:57:46 -- nvmf/common.sh@295 -- # e810=() 00:23:01.025 01:57:46 -- nvmf/common.sh@295 -- # local -ga e810 00:23:01.025 01:57:46 -- nvmf/common.sh@296 -- # x722=() 00:23:01.025 01:57:46 -- nvmf/common.sh@296 -- # local -ga x722 00:23:01.025 01:57:46 -- nvmf/common.sh@297 -- # mlx=() 00:23:01.025 01:57:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:01.025 01:57:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:01.025 01:57:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:01.025 01:57:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:01.025 01:57:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:01.025 01:57:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:01.025 01:57:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:01.025 01:57:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:01.025 01:57:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:01.025 01:57:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:01.025 01:57:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:01.025 01:57:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:01.025 01:57:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:01.025 01:57:46 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:23:01.025 01:57:46 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:23:01.025 01:57:46 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:23:01.025 01:57:46 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:23:01.025 01:57:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:01.025 01:57:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:01.025 01:57:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:01.025 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:01.025 01:57:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:01.025 01:57:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:01.025 01:57:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.025 01:57:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.025 01:57:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:01.025 01:57:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:01.025 01:57:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:01.025 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:01.025 01:57:46 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:23:01.025 01:57:46 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:23:01.025 01:57:46 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:01.025 01:57:46 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:01.025 01:57:46 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:23:01.025 01:57:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:01.025 01:57:46 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:23:01.025 01:57:46 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:23:01.025 01:57:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:01.025 01:57:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.025 01:57:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:01.025 01:57:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.025 01:57:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:23:01.025 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:01.025 01:57:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.025 01:57:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:01.025 01:57:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:01.025 01:57:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:01.025 01:57:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:01.025 01:57:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:01.025 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:01.025 01:57:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:01.025 01:57:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:01.025 01:57:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:01.025 01:57:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:01.025 01:57:46 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:23:01.025 01:57:46 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:23:01.025 01:57:46 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:01.025 01:57:46 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:01.025 01:57:46 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:01.025 01:57:46 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:23:01.025 01:57:46 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:01.025 01:57:46 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:01.025 01:57:46 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:23:01.025 01:57:46 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:01.025 01:57:46 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:01.025 01:57:46 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:23:01.025 01:57:46 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:23:01.025 01:57:46 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:23:01.025 01:57:46 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:01.025 01:57:46 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:01.025 01:57:46 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:01.025 01:57:46 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:23:01.025 01:57:46 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:01.025 01:57:46 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:01.025 01:57:46 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:01.025 01:57:46 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:23:01.025 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:01.025 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:23:01.025 00:23:01.025 --- 10.0.0.2 ping statistics --- 00:23:01.025 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.025 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:23:01.025 01:57:46 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:01.025 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:01.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:23:01.026 00:23:01.026 --- 10.0.0.1 ping statistics --- 00:23:01.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:01.026 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:23:01.026 01:57:46 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:01.026 01:57:46 -- nvmf/common.sh@410 -- # return 0 00:23:01.026 01:57:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:01.026 01:57:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:01.026 01:57:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:01.026 01:57:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:01.026 01:57:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:01.026 01:57:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:01.026 01:57:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:01.283 01:57:46 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:23:01.283 01:57:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:01.283 01:57:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:01.283 01:57:46 -- common/autotest_common.sh@10 -- # set +x 00:23:01.283 01:57:46 -- nvmf/common.sh@469 -- # nvmfpid=2212965 00:23:01.283 01:57:46 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:01.283 01:57:46 -- nvmf/common.sh@470 -- # waitforlisten 2212965 00:23:01.283 01:57:46 -- common/autotest_common.sh@819 -- # '[' -z 2212965 ']' 00:23:01.283 01:57:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:01.283 01:57:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:01.283 01:57:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:01.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:01.283 01:57:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:01.283 01:57:46 -- common/autotest_common.sh@10 -- # set +x 00:23:01.283 [2024-04-15 01:57:46.723623] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:23:01.283 [2024-04-15 01:57:46.723694] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:01.283 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.283 [2024-04-15 01:57:46.787687] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:01.283 [2024-04-15 01:57:46.870881] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:01.283 [2024-04-15 01:57:46.871053] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:01.283 [2024-04-15 01:57:46.871071] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:01.283 [2024-04-15 01:57:46.871084] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
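[Editor's note] Summarizing the nvmf_tcp_init and nvmfappstart trace above: the test splits the two cvl ports across a network namespace, checks reachability both ways, then boots nvmf_tgt inside the namespace. Roughly, with paths shortened:

  # from nvmf_tcp_init (all commands appear in the trace):
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator
  # from nvmfappstart:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  waitforlisten "$nvmfpid"                           # wait for /var/tmp/spdk.sock to answer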
00:23:01.283 [2024-04-15 01:57:46.871142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:01.283 [2024-04-15 01:57:46.871210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.283 [2024-04-15 01:57:46.871277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:01.283 [2024-04-15 01:57:46.871279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.219 01:57:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:02.219 01:57:47 -- common/autotest_common.sh@852 -- # return 0 00:23:02.219 01:57:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:02.219 01:57:47 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:02.219 01:57:47 -- common/autotest_common.sh@10 -- # set +x 00:23:02.219 01:57:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.219 01:57:47 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:23:02.219 01:57:47 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:02.219 01:57:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:02.219 01:57:47 -- common/autotest_common.sh@10 -- # set +x 00:23:02.219 Malloc0 00:23:02.219 01:57:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:02.219 01:57:47 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:23:02.219 01:57:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:02.219 01:57:47 -- common/autotest_common.sh@10 -- # set +x 00:23:02.219 Delay0 00:23:02.219 01:57:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:02.219 01:57:47 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:02.219 01:57:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:02.219 01:57:47 -- common/autotest_common.sh@10 -- # set +x 00:23:02.219 [2024-04-15 01:57:47.724616] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.219 01:57:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:02.219 01:57:47 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:02.219 01:57:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:02.219 01:57:47 -- common/autotest_common.sh@10 -- # set +x 00:23:02.219 01:57:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:02.219 01:57:47 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:23:02.219 01:57:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:02.219 01:57:47 -- common/autotest_common.sh@10 -- # set +x 00:23:02.219 01:57:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:02.219 01:57:47 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:02.219 01:57:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:02.219 01:57:47 -- common/autotest_common.sh@10 -- # set +x 00:23:02.219 [2024-04-15 01:57:47.752857] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:02.219 01:57:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:02.219 01:57:47 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:03.156 01:57:48 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:23:03.156 01:57:48 -- common/autotest_common.sh@1177 -- # local i=0 00:23:03.156 01:57:48 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:23:03.156 01:57:48 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:23:03.156 01:57:48 -- common/autotest_common.sh@1184 -- # sleep 2 00:23:05.087 01:57:50 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:23:05.087 01:57:50 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:23:05.088 01:57:50 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:23:05.088 01:57:50 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:23:05.088 01:57:50 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:23:05.088 01:57:50 -- common/autotest_common.sh@1187 -- # return 0 00:23:05.088 01:57:50 -- target/initiator_timeout.sh@35 -- # fio_pid=2213413 00:23:05.088 01:57:50 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:23:05.088 01:57:50 -- target/initiator_timeout.sh@37 -- # sleep 3 00:23:05.088 [global] 00:23:05.088 thread=1 00:23:05.088 invalidate=1 00:23:05.088 rw=write 00:23:05.088 time_based=1 00:23:05.088 runtime=60 00:23:05.088 ioengine=libaio 00:23:05.088 direct=1 00:23:05.088 bs=4096 00:23:05.088 iodepth=1 00:23:05.088 norandommap=0 00:23:05.088 numjobs=1 00:23:05.088 00:23:05.088 verify_dump=1 00:23:05.088 verify_backlog=512 00:23:05.088 verify_state_save=0 00:23:05.088 do_verify=1 00:23:05.088 verify=crc32c-intel 00:23:05.088 [job0] 00:23:05.088 filename=/dev/nvme0n1 00:23:05.088 Could not set queue depth (nvme0n1) 00:23:05.088 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:23:05.088 fio-3.35 00:23:05.088 Starting 1 thread 00:23:08.377 01:57:53 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:23:08.377 01:57:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:08.377 01:57:53 -- common/autotest_common.sh@10 -- # set +x 00:23:08.377 true 00:23:08.377 01:57:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:08.377 01:57:53 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:23:08.377 01:57:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:08.377 01:57:53 -- common/autotest_common.sh@10 -- # set +x 00:23:08.377 true 00:23:08.377 01:57:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:08.377 01:57:53 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:23:08.377 01:57:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:08.377 01:57:53 -- common/autotest_common.sh@10 -- # set +x 00:23:08.377 true 00:23:08.377 01:57:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:08.377 01:57:53 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:23:08.377 01:57:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:08.377 01:57:53 -- common/autotest_common.sh@10 -- # set +x 00:23:08.377 true 00:23:08.377 01:57:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
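[Editor's note] The four bdev_delay_update_latency calls above are the heart of the test: Delay0 was created with 30-microsecond latencies (-r 30 -t 30 -w 30 -n 30) and is now raised to 31000000 us, roughly 31 s, presumably chosen to sit just past the host's default 30 s I/O timeout so in-flight fio writes stall and time out. Condensed, together with the restore that follows below:

  rpc_cmd bdev_delay_update_latency Delay0 avg_read  31000000   # ~31 s, was 30 us
  rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000
  rpc_cmd bdev_delay_update_latency Delay0 p99_read  31000000
  rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000  # value as issued above
  sleep 3                                                       # let I/O queue up behind the delay
  rpc_cmd bdev_delay_update_latency Delay0 avg_read  30         # restore the fast path
  rpc_cmd bdev_delay_update_latency Delay0 avg_write 30
  rpc_cmd bdev_delay_update_latency Delay0 p99_read  30
  rpc_cmd bdev_delay_update_latency Delay0 p99_write 30

With fio still running for 60 s against the delayed namespace, the test then checks that fio exits cleanly once the latency is restored (the "nvmf hotplug test: fio successful as expected" line further down).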
00:23:08.377 01:57:53 -- target/initiator_timeout.sh@45 -- # sleep 3 00:23:10.909 01:57:56 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:23:10.909 01:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.909 01:57:56 -- common/autotest_common.sh@10 -- # set +x 00:23:10.909 true 00:23:10.909 01:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.909 01:57:56 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:23:10.909 01:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.909 01:57:56 -- common/autotest_common.sh@10 -- # set +x 00:23:10.909 true 00:23:10.909 01:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.909 01:57:56 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:23:10.909 01:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.909 01:57:56 -- common/autotest_common.sh@10 -- # set +x 00:23:10.909 true 00:23:10.909 01:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.909 01:57:56 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:23:10.909 01:57:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.909 01:57:56 -- common/autotest_common.sh@10 -- # set +x 00:23:10.909 true 00:23:10.909 01:57:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.909 01:57:56 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:23:10.909 01:57:56 -- target/initiator_timeout.sh@54 -- # wait 2213413 00:24:07.205 00:24:07.205 job0: (groupid=0, jobs=1): err= 0: pid=2213605: Mon Apr 15 01:58:50 2024 00:24:07.205 read: IOPS=7, BW=30.0KiB/s (30.7kB/s)(1800KiB/60019msec) 00:24:07.205 slat (usec): min=8, max=6702, avg=41.31, stdev=314.94 00:24:07.205 clat (usec): min=690, max=41333k, avg=132758.82, stdev=1946517.99 00:24:07.205 lat (usec): min=715, max=41333k, avg=132800.13, stdev=1946517.28 00:24:07.205 clat percentiles (msec): 00:24:07.205 | 1.00th=[ 41], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 42], 00:24:07.205 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 42], 00:24:07.205 | 70.00th=[ 42], 80.00th=[ 42], 90.00th=[ 42], 95.00th=[ 43], 00:24:07.205 | 99.00th=[ 43], 99.50th=[ 43], 99.90th=[17113], 99.95th=[17113], 00:24:07.205 | 99.99th=[17113] 00:24:07.205 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60019msec); 0 zone resets 00:24:07.205 slat (nsec): min=8001, max=73561, avg=22550.75, stdev=12593.75 00:24:07.205 clat (usec): min=367, max=645, avg=469.02, stdev=35.51 00:24:07.205 lat (usec): min=380, max=684, avg=491.57, stdev=41.30 00:24:07.205 clat percentiles (usec): 00:24:07.205 | 1.00th=[ 392], 5.00th=[ 416], 10.00th=[ 429], 20.00th=[ 445], 00:24:07.205 | 30.00th=[ 453], 40.00th=[ 461], 50.00th=[ 465], 60.00th=[ 469], 00:24:07.205 | 70.00th=[ 482], 80.00th=[ 498], 90.00th=[ 510], 95.00th=[ 519], 00:24:07.205 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 644], 99.95th=[ 644], 00:24:07.205 | 99.99th=[ 644] 00:24:07.205 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:24:07.205 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:24:07.205 lat (usec) : 500=43.04%, 750=10.29% 00:24:07.205 lat (msec) : 50=46.57%, >=2000=0.10% 00:24:07.205 cpu : usr=0.01%, sys=0.07%, ctx=963, majf=0, minf=2 00:24:07.205 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:07.205 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:24:07.205 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:07.205 issued rwts: total=450,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:07.205 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:07.205 00:24:07.205 Run status group 0 (all jobs): 00:24:07.205 READ: bw=30.0KiB/s (30.7kB/s), 30.0KiB/s-30.0KiB/s (30.7kB/s-30.7kB/s), io=1800KiB (1843kB), run=60019-60019msec 00:24:07.205 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60019-60019msec 00:24:07.205 00:24:07.205 Disk stats (read/write): 00:24:07.205 nvme0n1: ios=546/512, merge=0/0, ticks=19604/244, in_queue=19848, util=99.69% 00:24:07.205 01:58:50 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:07.205 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:07.205 01:58:50 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:07.205 01:58:50 -- common/autotest_common.sh@1198 -- # local i=0 00:24:07.205 01:58:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:24:07.205 01:58:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:07.205 01:58:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:24:07.205 01:58:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:07.205 01:58:50 -- common/autotest_common.sh@1210 -- # return 0 00:24:07.205 01:58:50 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:24:07.205 01:58:50 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:24:07.205 nvmf hotplug test: fio successful as expected 00:24:07.205 01:58:50 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:07.205 01:58:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:07.205 01:58:50 -- common/autotest_common.sh@10 -- # set +x 00:24:07.205 01:58:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:07.205 01:58:50 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:24:07.205 01:58:50 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:24:07.205 01:58:50 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:24:07.205 01:58:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:07.205 01:58:50 -- nvmf/common.sh@116 -- # sync 00:24:07.205 01:58:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:07.205 01:58:50 -- nvmf/common.sh@119 -- # set +e 00:24:07.205 01:58:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:07.205 01:58:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:07.205 rmmod nvme_tcp 00:24:07.205 rmmod nvme_fabrics 00:24:07.205 rmmod nvme_keyring 00:24:07.205 01:58:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:07.205 01:58:51 -- nvmf/common.sh@123 -- # set -e 00:24:07.205 01:58:51 -- nvmf/common.sh@124 -- # return 0 00:24:07.205 01:58:51 -- nvmf/common.sh@477 -- # '[' -n 2212965 ']' 00:24:07.205 01:58:51 -- nvmf/common.sh@478 -- # killprocess 2212965 00:24:07.205 01:58:51 -- common/autotest_common.sh@926 -- # '[' -z 2212965 ']' 00:24:07.205 01:58:51 -- common/autotest_common.sh@930 -- # kill -0 2212965 00:24:07.205 01:58:51 -- common/autotest_common.sh@931 -- # uname 00:24:07.205 01:58:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:07.205 01:58:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2212965 00:24:07.205 01:58:51 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:07.205 01:58:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:07.205 01:58:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2212965' 00:24:07.205 killing process with pid 2212965 00:24:07.205 01:58:51 -- common/autotest_common.sh@945 -- # kill 2212965 00:24:07.205 01:58:51 -- common/autotest_common.sh@950 -- # wait 2212965 00:24:07.205 01:58:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:07.205 01:58:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:07.205 01:58:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:07.205 01:58:51 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:07.205 01:58:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:07.205 01:58:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.205 01:58:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.205 01:58:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.773 01:58:53 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:07.773 00:24:07.773 real 1m8.724s 00:24:07.773 user 4m13.898s 00:24:07.773 sys 0m6.394s 00:24:07.773 01:58:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:07.773 01:58:53 -- common/autotest_common.sh@10 -- # set +x 00:24:07.773 ************************************ 00:24:07.773 END TEST nvmf_initiator_timeout 00:24:07.773 ************************************ 00:24:07.773 01:58:53 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:24:07.773 01:58:53 -- nvmf/nvmf.sh@70 -- # '[' tcp = tcp ']' 00:24:07.773 01:58:53 -- nvmf/nvmf.sh@71 -- # gather_supported_nvmf_pci_devs 00:24:07.773 01:58:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:07.773 01:58:53 -- common/autotest_common.sh@10 -- # set +x 00:24:09.682 01:58:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:09.682 01:58:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:09.682 01:58:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:09.682 01:58:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:09.682 01:58:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:09.682 01:58:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:09.682 01:58:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:09.682 01:58:55 -- nvmf/common.sh@294 -- # net_devs=() 00:24:09.682 01:58:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:09.682 01:58:55 -- nvmf/common.sh@295 -- # e810=() 00:24:09.682 01:58:55 -- nvmf/common.sh@295 -- # local -ga e810 00:24:09.682 01:58:55 -- nvmf/common.sh@296 -- # x722=() 00:24:09.682 01:58:55 -- nvmf/common.sh@296 -- # local -ga x722 00:24:09.682 01:58:55 -- nvmf/common.sh@297 -- # mlx=() 00:24:09.682 01:58:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:09.682 01:58:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.682 01:58:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.682 01:58:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.682 01:58:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.682 01:58:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.682 01:58:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.682 01:58:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.682 01:58:55 -- nvmf/common.sh@313 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.682 01:58:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.682 01:58:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.682 01:58:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.682 01:58:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:09.682 01:58:55 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:09.682 01:58:55 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:09.682 01:58:55 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:09.682 01:58:55 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:09.682 01:58:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:09.682 01:58:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:09.682 01:58:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:09.682 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:09.682 01:58:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:09.682 01:58:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:09.682 01:58:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.682 01:58:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.682 01:58:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:09.682 01:58:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:09.682 01:58:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:09.682 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:09.682 01:58:55 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:09.682 01:58:55 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:09.683 01:58:55 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.683 01:58:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.683 01:58:55 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:09.683 01:58:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:09.683 01:58:55 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:09.683 01:58:55 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:09.683 01:58:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:09.683 01:58:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.683 01:58:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:09.683 01:58:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.683 01:58:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:09.683 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:09.683 01:58:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.683 01:58:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:09.683 01:58:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.683 01:58:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:09.683 01:58:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.683 01:58:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:09.683 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:09.683 01:58:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.683 01:58:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:09.683 01:58:55 -- nvmf/nvmf.sh@72 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.683 01:58:55 -- nvmf/nvmf.sh@73 -- # (( 2 > 0 )) 00:24:09.683 01:58:55 -- nvmf/nvmf.sh@74 -- # run_test nvmf_perf_adq 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:09.683 01:58:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:09.683 01:58:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:09.683 01:58:55 -- common/autotest_common.sh@10 -- # set +x 00:24:09.683 ************************************ 00:24:09.683 START TEST nvmf_perf_adq 00:24:09.683 ************************************ 00:24:09.683 01:58:55 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:24:09.683 * Looking for test storage... 00:24:09.683 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:09.683 01:58:55 -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:09.683 01:58:55 -- nvmf/common.sh@7 -- # uname -s 00:24:09.683 01:58:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:09.683 01:58:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:09.683 01:58:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:09.683 01:58:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:09.683 01:58:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:09.683 01:58:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:09.683 01:58:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:09.683 01:58:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:09.683 01:58:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:09.683 01:58:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:09.683 01:58:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:09.683 01:58:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:09.683 01:58:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:09.683 01:58:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:09.683 01:58:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:09.683 01:58:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.683 01:58:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.683 01:58:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.683 01:58:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.683 01:58:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.683 01:58:55 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.683 01:58:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.683 01:58:55 -- paths/export.sh@5 -- # export PATH 00:24:09.683 01:58:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.683 01:58:55 -- nvmf/common.sh@46 -- # : 0 00:24:09.683 01:58:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:09.683 01:58:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:09.683 01:58:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:09.683 01:58:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.683 01:58:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.683 01:58:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:09.683 01:58:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:09.683 01:58:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:09.683 01:58:55 -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:24:09.683 01:58:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:09.683 01:58:55 -- common/autotest_common.sh@10 -- # set +x 00:24:11.590 01:58:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:11.590 01:58:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:11.590 01:58:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:11.590 01:58:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:11.590 01:58:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:11.590 01:58:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:11.590 01:58:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:11.590 01:58:57 -- nvmf/common.sh@294 -- # net_devs=() 00:24:11.590 01:58:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:11.590 01:58:57 -- nvmf/common.sh@295 -- # e810=() 00:24:11.590 01:58:57 -- nvmf/common.sh@295 -- # local -ga e810 00:24:11.590 01:58:57 -- nvmf/common.sh@296 -- # x722=() 00:24:11.590 01:58:57 -- nvmf/common.sh@296 -- # local -ga x722 00:24:11.590 01:58:57 -- nvmf/common.sh@297 -- # mlx=() 00:24:11.590 01:58:57 -- nvmf/common.sh@297 -- # local 
-ga mlx 00:24:11.590 01:58:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:11.590 01:58:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:11.590 01:58:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:11.590 01:58:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:11.590 01:58:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:11.590 01:58:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:11.590 01:58:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:11.590 01:58:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:11.590 01:58:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:11.590 01:58:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:11.590 01:58:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:11.590 01:58:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:11.590 01:58:57 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:11.590 01:58:57 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:11.590 01:58:57 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:11.590 01:58:57 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:11.590 01:58:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:11.590 01:58:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:11.590 01:58:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:11.590 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:11.590 01:58:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:11.590 01:58:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:11.590 01:58:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.590 01:58:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.590 01:58:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:11.590 01:58:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:11.590 01:58:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:11.590 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:11.591 01:58:57 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:11.591 01:58:57 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:11.591 01:58:57 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:11.591 01:58:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:11.591 01:58:57 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:11.591 01:58:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:11.591 01:58:57 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:11.591 01:58:57 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:11.591 01:58:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:11.591 01:58:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.591 01:58:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:11.591 01:58:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.591 01:58:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:11.591 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:11.591 01:58:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.591 01:58:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:11.591 01:58:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
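gather_supported_nvmf_pci_devs builds its candidate list by vendor:device ID (Intel E810 at 0x1592/0x159b, X722 at 0x37d2, plus a set of Mellanox parts), keeps only the E810 bucket on this rig, and then maps each PCI function to its kernel netdev through sysfs; that lookup is what produces the "Found net devices under ..." lines. A sketch of the same sysfs resolution for a single function, using the 0000:0a:00.0 address reported above:

    # List the netdev name(s) bound to one PCI function, as the harness does.
    pci=0000:0a:00.0                          # taken from the "Found" line above
    for path in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$path" ] || continue            # skip if no driver/netdev is bound
        echo "net device under $pci: ${path##*/}"
    done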
00:24:11.591 01:58:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:11.591 01:58:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.591 01:58:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:11.591 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:11.591 01:58:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.591 01:58:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:11.591 01:58:57 -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:11.591 01:58:57 -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:24:11.591 01:58:57 -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:11.591 01:58:57 -- target/perf_adq.sh@59 -- # adq_reload_driver 00:24:11.591 01:58:57 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:12.161 01:58:57 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:14.065 01:58:59 -- target/perf_adq.sh@54 -- # sleep 5 00:24:19.344 01:59:04 -- target/perf_adq.sh@67 -- # nvmftestinit 00:24:19.344 01:59:04 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:19.344 01:59:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.344 01:59:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:19.344 01:59:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:19.344 01:59:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:19.344 01:59:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.344 01:59:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:19.344 01:59:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.344 01:59:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:19.344 01:59:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:19.344 01:59:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:19.344 01:59:04 -- common/autotest_common.sh@10 -- # set +x 00:24:19.344 01:59:04 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:19.344 01:59:04 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:19.344 01:59:04 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:19.344 01:59:04 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:19.344 01:59:04 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:19.344 01:59:04 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:19.344 01:59:04 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:19.344 01:59:04 -- nvmf/common.sh@294 -- # net_devs=() 00:24:19.344 01:59:04 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:19.344 01:59:04 -- nvmf/common.sh@295 -- # e810=() 00:24:19.344 01:59:04 -- nvmf/common.sh@295 -- # local -ga e810 00:24:19.344 01:59:04 -- nvmf/common.sh@296 -- # x722=() 00:24:19.344 01:59:04 -- nvmf/common.sh@296 -- # local -ga x722 00:24:19.344 01:59:04 -- nvmf/common.sh@297 -- # mlx=() 00:24:19.344 01:59:04 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:19.344 01:59:04 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.344 01:59:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.344 01:59:04 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.344 01:59:04 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.344 01:59:04 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.344 01:59:04 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.344 01:59:04 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.344 01:59:04 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.344 01:59:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.344 01:59:04 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.344 01:59:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.344 01:59:04 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:19.344 01:59:04 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:19.344 01:59:04 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:19.344 01:59:04 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:19.344 01:59:04 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:19.344 01:59:04 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:19.344 01:59:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:19.344 01:59:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:19.344 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:19.344 01:59:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:19.344 01:59:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:19.344 01:59:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.344 01:59:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.344 01:59:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:19.344 01:59:04 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:19.344 01:59:04 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:19.344 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:19.344 01:59:04 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:19.344 01:59:04 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:19.344 01:59:04 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.344 01:59:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.344 01:59:04 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:19.344 01:59:04 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:19.344 01:59:04 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:19.344 01:59:04 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:19.344 01:59:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:19.344 01:59:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.344 01:59:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:19.344 01:59:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.344 01:59:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:19.344 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:19.344 01:59:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.344 01:59:04 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:19.345 01:59:04 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.345 01:59:04 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:19.345 01:59:04 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.345 01:59:04 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:19.345 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:19.345 01:59:04 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.345 01:59:04 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:19.345 01:59:04 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:19.345 01:59:04 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:19.345 01:59:04 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:19.345 01:59:04 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:19.345 01:59:04 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.345 01:59:04 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.345 01:59:04 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.345 01:59:04 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:19.345 01:59:04 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.345 01:59:04 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.345 01:59:04 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:19.345 01:59:04 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.345 01:59:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.345 01:59:04 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:19.345 01:59:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:19.345 01:59:04 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.345 01:59:04 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.345 01:59:04 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.345 01:59:04 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.345 01:59:04 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:19.345 01:59:04 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.345 01:59:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.345 01:59:04 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.345 01:59:04 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:19.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:19.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.127 ms 00:24:19.345 00:24:19.345 --- 10.0.0.2 ping statistics --- 00:24:19.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.345 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:24:19.345 01:59:04 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:19.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:24:19.345 00:24:19.345 --- 10.0.0.1 ping statistics --- 00:24:19.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.345 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:24:19.345 01:59:04 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.345 01:59:04 -- nvmf/common.sh@410 -- # return 0 00:24:19.345 01:59:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:19.345 01:59:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.345 01:59:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:19.345 01:59:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:19.345 01:59:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.345 01:59:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:19.345 01:59:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:19.345 01:59:04 -- target/perf_adq.sh@68 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:19.345 01:59:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:19.345 01:59:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:19.345 01:59:04 -- common/autotest_common.sh@10 -- # set +x 00:24:19.345 01:59:04 -- nvmf/common.sh@469 -- # nvmfpid=2225360 00:24:19.345 01:59:04 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:19.345 01:59:04 -- nvmf/common.sh@470 -- # waitforlisten 2225360 00:24:19.345 01:59:04 -- common/autotest_common.sh@819 -- # '[' -z 2225360 ']' 00:24:19.345 01:59:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.345 01:59:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:19.345 01:59:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:19.345 01:59:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:19.345 01:59:04 -- common/autotest_common.sh@10 -- # set +x 00:24:19.345 [2024-04-15 01:59:04.928928] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:24:19.345 [2024-04-15 01:59:04.929029] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.345 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.603 [2024-04-15 01:59:05.000544] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:19.603 [2024-04-15 01:59:05.089990] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:19.603 [2024-04-15 01:59:05.090201] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.603 [2024-04-15 01:59:05.090222] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.603 [2024-04-15 01:59:05.090237] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
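With both namespaces plumbed and the cross-namespace pings answering, the harness starts nvmf_tgt inside cvl_0_0_ns_spdk with --wait-for-rpc and then, in the trace that follows, finishes configuration over JSON-RPC: placement-id 0 on the posix sock layer, a TCP transport at sock priority 0, and a Malloc1-backed subsystem listening on 10.0.0.2:4420. A condensed sketch of that bring-up, assuming rpc.py on the default /var/tmp/spdk.sock (a Unix socket, so it is reachable from outside the network namespace):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    until "$RPC" rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done  # crude waitforlisten stand-in
    "$RPC" sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
    "$RPC" framework_start_init
    "$RPC" nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
    "$RPC" bdev_malloc_create 64 512 -b Malloc1
    "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The second target later in the log is configured identically except that --enable-placement-id and --sock-priority become 1, the ADQ-enabled variant.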
00:24:19.603 [2024-04-15 01:59:05.090323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.603 [2024-04-15 01:59:05.090395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.603 [2024-04-15 01:59:05.090492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:19.603 [2024-04-15 01:59:05.090495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:19.603 01:59:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:19.603 01:59:05 -- common/autotest_common.sh@852 -- # return 0 00:24:19.603 01:59:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:19.603 01:59:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:19.603 01:59:05 -- common/autotest_common.sh@10 -- # set +x 00:24:19.603 01:59:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.603 01:59:05 -- target/perf_adq.sh@69 -- # adq_configure_nvmf_target 0 00:24:19.603 01:59:05 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:24:19.603 01:59:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.604 01:59:05 -- common/autotest_common.sh@10 -- # set +x 00:24:19.604 01:59:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.604 01:59:05 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:19.604 01:59:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.604 01:59:05 -- common/autotest_common.sh@10 -- # set +x 00:24:19.864 01:59:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.864 01:59:05 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:24:19.864 01:59:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.864 01:59:05 -- common/autotest_common.sh@10 -- # set +x 00:24:19.864 [2024-04-15 01:59:05.262930] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.864 01:59:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.864 01:59:05 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:19.864 01:59:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.864 01:59:05 -- common/autotest_common.sh@10 -- # set +x 00:24:19.864 Malloc1 00:24:19.864 01:59:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.864 01:59:05 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:19.864 01:59:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.864 01:59:05 -- common/autotest_common.sh@10 -- # set +x 00:24:19.864 01:59:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.864 01:59:05 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:19.864 01:59:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.864 01:59:05 -- common/autotest_common.sh@10 -- # set +x 00:24:19.864 01:59:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.864 01:59:05 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:19.864 01:59:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:19.864 01:59:05 -- common/autotest_common.sh@10 -- # set +x 00:24:19.864 [2024-04-15 01:59:05.314239] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.864 01:59:05 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:19.864 01:59:05 -- target/perf_adq.sh@73 -- # perfpid=2225430 00:24:19.864 01:59:05 -- target/perf_adq.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:19.864 01:59:05 -- target/perf_adq.sh@74 -- # sleep 2 00:24:19.864 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.768 01:59:07 -- target/perf_adq.sh@76 -- # rpc_cmd nvmf_get_stats 00:24:21.768 01:59:07 -- target/perf_adq.sh@76 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:24:21.768 01:59:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:21.768 01:59:07 -- target/perf_adq.sh@76 -- # wc -l 00:24:21.768 01:59:07 -- common/autotest_common.sh@10 -- # set +x 00:24:21.768 01:59:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:21.768 01:59:07 -- target/perf_adq.sh@76 -- # count=4 00:24:21.768 01:59:07 -- target/perf_adq.sh@77 -- # [[ 4 -ne 4 ]] 00:24:21.768 01:59:07 -- target/perf_adq.sh@81 -- # wait 2225430 00:24:29.877 Initializing NVMe Controllers 00:24:29.877 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:29.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:29.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:29.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:29.877 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:29.877 Initialization complete. Launching workers. 00:24:29.877 ======================================================== 00:24:29.877 Latency(us) 00:24:29.877 Device Information : IOPS MiB/s Average min max 00:24:29.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11854.81 46.31 5398.98 1416.18 8574.33 00:24:29.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11862.71 46.34 5406.04 1692.33 44499.21 00:24:29.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 12053.51 47.08 5309.73 2745.25 9897.26 00:24:29.877 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6371.10 24.89 10048.85 2984.82 18039.28 00:24:29.877 ======================================================== 00:24:29.877 Total : 42142.12 164.62 6078.41 1416.18 44499.21 00:24:29.877 00:24:29.877 01:59:15 -- target/perf_adq.sh@82 -- # nvmftestfini 00:24:29.877 01:59:15 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:29.877 01:59:15 -- nvmf/common.sh@116 -- # sync 00:24:29.877 01:59:15 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:24:29.877 01:59:15 -- nvmf/common.sh@119 -- # set +e 00:24:29.877 01:59:15 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:29.877 01:59:15 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:24:29.877 rmmod nvme_tcp 00:24:29.877 rmmod nvme_fabrics 00:24:30.136 rmmod nvme_keyring 00:24:30.136 01:59:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:30.136 01:59:15 -- nvmf/common.sh@123 -- # set -e 00:24:30.136 01:59:15 -- nvmf/common.sh@124 -- # return 0 00:24:30.136 01:59:15 -- nvmf/common.sh@477 -- # '[' -n 2225360 ']' 00:24:30.136 01:59:15 -- nvmf/common.sh@478 -- # killprocess 2225360 00:24:30.136 01:59:15 -- common/autotest_common.sh@926 -- # '[' -z 2225360 ']' 00:24:30.136 01:59:15 -- common/autotest_common.sh@930 -- 
# kill -0 2225360 00:24:30.136 01:59:15 -- common/autotest_common.sh@931 -- # uname 00:24:30.136 01:59:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:24:30.136 01:59:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2225360 00:24:30.136 01:59:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:24:30.136 01:59:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:24:30.136 01:59:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2225360' 00:24:30.136 killing process with pid 2225360 00:24:30.136 01:59:15 -- common/autotest_common.sh@945 -- # kill 2225360 00:24:30.136 01:59:15 -- common/autotest_common.sh@950 -- # wait 2225360 00:24:30.395 01:59:15 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:30.395 01:59:15 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:24:30.395 01:59:15 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:24:30.395 01:59:15 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:30.395 01:59:15 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:24:30.395 01:59:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.395 01:59:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.395 01:59:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:32.302 01:59:17 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:24:32.302 01:59:17 -- target/perf_adq.sh@84 -- # adq_reload_driver 00:24:32.302 01:59:17 -- target/perf_adq.sh@52 -- # rmmod ice 00:24:33.240 01:59:18 -- target/perf_adq.sh@53 -- # modprobe ice 00:24:35.147 01:59:20 -- target/perf_adq.sh@54 -- # sleep 5 00:24:40.508 01:59:25 -- target/perf_adq.sh@87 -- # nvmftestinit 00:24:40.508 01:59:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:40.508 01:59:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:40.509 01:59:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:40.509 01:59:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:40.509 01:59:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:40.509 01:59:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.509 01:59:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:40.509 01:59:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:40.509 01:59:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:40.509 01:59:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:40.509 01:59:25 -- common/autotest_common.sh@10 -- # set +x 00:24:40.509 01:59:25 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:40.509 01:59:25 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:40.509 01:59:25 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:40.509 01:59:25 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:40.509 01:59:25 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:40.509 01:59:25 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:40.509 01:59:25 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:40.509 01:59:25 -- nvmf/common.sh@294 -- # net_devs=() 00:24:40.509 01:59:25 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:40.509 01:59:25 -- nvmf/common.sh@295 -- # e810=() 00:24:40.509 01:59:25 -- nvmf/common.sh@295 -- # local -ga e810 00:24:40.509 01:59:25 -- nvmf/common.sh@296 -- # x722=() 00:24:40.509 01:59:25 -- nvmf/common.sh@296 -- # local -ga x722 00:24:40.509 01:59:25 -- nvmf/common.sh@297 -- # mlx=() 00:24:40.509 01:59:25 
-- nvmf/common.sh@297 -- # local -ga mlx 00:24:40.509 01:59:25 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.509 01:59:25 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.509 01:59:25 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.509 01:59:25 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.509 01:59:25 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.509 01:59:25 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.509 01:59:25 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.509 01:59:25 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.509 01:59:25 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.509 01:59:25 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.509 01:59:25 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.509 01:59:25 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:40.509 01:59:25 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:40.509 01:59:25 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:40.509 01:59:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:40.509 01:59:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:40.509 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:40.509 01:59:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:40.509 01:59:25 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:40.509 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:40.509 01:59:25 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:40.509 01:59:25 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:40.509 01:59:25 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.509 01:59:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:40.509 01:59:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.509 01:59:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:40.509 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:40.509 01:59:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.509 01:59:25 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:40.509 01:59:25 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.509 01:59:25 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:40.509 01:59:25 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.509 01:59:25 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:40.509 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:40.509 01:59:25 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.509 01:59:25 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:40.509 01:59:25 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:40.509 01:59:25 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:40.509 01:59:25 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:40.509 01:59:25 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:40.509 01:59:25 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:40.509 01:59:25 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:40.509 01:59:25 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:40.509 01:59:25 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:40.509 01:59:25 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:40.509 01:59:25 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:40.509 01:59:25 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:40.509 01:59:25 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:40.509 01:59:25 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:40.509 01:59:25 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:40.509 01:59:25 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:40.509 01:59:25 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:40.509 01:59:25 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:40.509 01:59:25 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:40.509 01:59:25 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:40.509 01:59:25 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:40.509 01:59:25 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:40.509 01:59:25 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:40.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:40.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:24:40.509 00:24:40.509 --- 10.0.0.2 ping statistics --- 00:24:40.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.509 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:24:40.509 01:59:25 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:40.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:40.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:24:40.509 00:24:40.509 --- 10.0.0.1 ping statistics --- 00:24:40.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.509 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:24:40.509 01:59:25 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:40.509 01:59:25 -- nvmf/common.sh@410 -- # return 0 00:24:40.509 01:59:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:40.509 01:59:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:40.509 01:59:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:40.509 01:59:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:40.509 01:59:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:40.509 01:59:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:40.509 01:59:25 -- target/perf_adq.sh@88 -- # adq_configure_driver 00:24:40.509 01:59:25 -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:24:40.509 01:59:25 -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:24:40.509 01:59:25 -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:24:40.509 net.core.busy_poll = 1 00:24:40.509 01:59:25 -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:24:40.509 net.core.busy_read = 1 00:24:40.509 01:59:25 -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:24:40.509 01:59:25 -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:24:40.509 01:59:25 -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:24:40.509 01:59:25 -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:24:40.509 01:59:25 -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:24:40.509 01:59:25 -- target/perf_adq.sh@89 -- # nvmfappstart -m 0xF --wait-for-rpc 00:24:40.509 01:59:25 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:40.509 01:59:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:40.509 01:59:25 -- common/autotest_common.sh@10 -- # set +x 00:24:40.509 01:59:25 -- nvmf/common.sh@469 -- # nvmfpid=2228119 00:24:40.509 01:59:25 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:24:40.509 01:59:25 -- nvmf/common.sh@470 -- # waitforlisten 2228119 00:24:40.509 01:59:25 -- common/autotest_common.sh@819 -- # '[' -z 2228119 ']' 00:24:40.509 01:59:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.509 01:59:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:40.509 01:59:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
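adq_configure_driver is where the ADQ pieces land on the NIC: hardware TC offload and busy polling are switched on, an mqprio root qdisc splits the device into two traffic classes of two queues each, and a flower filter steers NVMe/TCP traffic for the 10.0.0.2:4420 listener into TC 1 in hardware (skip_sw). The same plumbing, sketched without the netns wrapper and assuming an ice-driven interface named in IFACE:

    IFACE=cvl_0_0                              # the target-side interface above
    ethtool --offload "$IFACE" hw-tc-offload on
    ethtool --set-priv-flags "$IFACE" channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1             # busy-poll sockets instead of sleeping
    sysctl -w net.core.busy_read=1
    tc qdisc add dev "$IFACE" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev "$IFACE" ingress
    tc filter add dev "$IFACE" protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1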
00:24:40.509 01:59:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:40.509 01:59:25 -- common/autotest_common.sh@10 -- # set +x 00:24:40.509 [2024-04-15 01:59:25.839143] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:24:40.509 [2024-04-15 01:59:25.839231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.510 EAL: No free 2048 kB hugepages reported on node 1 00:24:40.510 [2024-04-15 01:59:25.906422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:40.510 [2024-04-15 01:59:25.994991] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:40.510 [2024-04-15 01:59:25.995149] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.510 [2024-04-15 01:59:25.995168] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:40.510 [2024-04-15 01:59:25.995180] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:40.510 [2024-04-15 01:59:25.995231] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:40.510 [2024-04-15 01:59:25.995293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:40.510 [2024-04-15 01:59:25.995359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:40.510 [2024-04-15 01:59:25.995362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.510 01:59:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:40.510 01:59:26 -- common/autotest_common.sh@852 -- # return 0 00:24:40.510 01:59:26 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:40.510 01:59:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:40.510 01:59:26 -- common/autotest_common.sh@10 -- # set +x 00:24:40.510 01:59:26 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.510 01:59:26 -- target/perf_adq.sh@90 -- # adq_configure_nvmf_target 1 00:24:40.510 01:59:26 -- target/perf_adq.sh@42 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:24:40.510 01:59:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:40.510 01:59:26 -- common/autotest_common.sh@10 -- # set +x 00:24:40.510 01:59:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:40.510 01:59:26 -- target/perf_adq.sh@43 -- # rpc_cmd framework_start_init 00:24:40.510 01:59:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:40.510 01:59:26 -- common/autotest_common.sh@10 -- # set +x 00:24:40.770 01:59:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:40.770 01:59:26 -- target/perf_adq.sh@44 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:24:40.770 01:59:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:40.770 01:59:26 -- common/autotest_common.sh@10 -- # set +x 00:24:40.770 [2024-04-15 01:59:26.196894] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.770 01:59:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:40.770 01:59:26 -- target/perf_adq.sh@45 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:40.770 01:59:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:40.770 01:59:26 -- 
common/autotest_common.sh@10 -- # set +x 00:24:40.770 Malloc1 00:24:40.770 01:59:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:40.770 01:59:26 -- target/perf_adq.sh@46 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:40.770 01:59:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:40.770 01:59:26 -- common/autotest_common.sh@10 -- # set +x 00:24:40.770 01:59:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:40.770 01:59:26 -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:40.770 01:59:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:40.770 01:59:26 -- common/autotest_common.sh@10 -- # set +x 00:24:40.770 01:59:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:40.770 01:59:26 -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.770 01:59:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:40.770 01:59:26 -- common/autotest_common.sh@10 -- # set +x 00:24:40.770 [2024-04-15 01:59:26.250072] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.770 01:59:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:40.770 01:59:26 -- target/perf_adq.sh@94 -- # perfpid=2228154 00:24:40.770 01:59:26 -- target/perf_adq.sh@95 -- # sleep 2 00:24:40.770 01:59:26 -- target/perf_adq.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:24:40.770 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.676 01:59:28 -- target/perf_adq.sh@97 -- # rpc_cmd nvmf_get_stats 00:24:42.676 01:59:28 -- target/perf_adq.sh@97 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:24:42.676 01:59:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:42.676 01:59:28 -- target/perf_adq.sh@97 -- # wc -l 00:24:42.676 01:59:28 -- common/autotest_common.sh@10 -- # set +x 00:24:42.676 01:59:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:42.676 01:59:28 -- target/perf_adq.sh@97 -- # count=2 00:24:42.676 01:59:28 -- target/perf_adq.sh@98 -- # [[ 2 -lt 2 ]] 00:24:42.676 01:59:28 -- target/perf_adq.sh@103 -- # wait 2228154 00:24:50.803 Initializing NVMe Controllers 00:24:50.803 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:50.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:50.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:50.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:50.803 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:50.803 Initialization complete. Launching workers. 
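Before the results below, it is worth noting how the target serving this workload was assembled: everything after --wait-for-rpc went over the RPC socket, in the order traced above. A sketch of the same bring-up using scripts/rpc.py directly (rpc_cmd in these tests is a thin wrapper around it; all values are the ones from this run):

RPC="scripts/rpc.py -s /var/tmp/spdk.sock"

# Socket options before framework init: placement IDs and zero-copy sends.
$RPC sock_impl_set_options -i posix --enable-placement-id 1 --enable-zerocopy-send-server
$RPC framework_start_init                   # release the --wait-for-rpc hold
# Transport flags exactly as traced above.
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
$RPC bdev_malloc_create 64 512 -b Malloc1   # 64 MiB RAM-backed bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The perf client is then pointed at that listener via -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1', and the nvmf_get_stats | jq check above counts poll groups with zero active I/O queue pairs, only proceeding once at least two of the four are idle, i.e. once the connections appear to have been steered onto a subset of cores.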
00:24:50.803 ========================================================
00:24:50.803                                                                Latency(us)
00:24:50.803 Device Information                                                     :     IOPS    MiB/s    Average        min        max
00:24:50.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4:  5028.60    19.64   12732.11    2265.16   58379.18
00:24:50.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5:  5628.60    21.99   11371.69    1901.85   56623.83
00:24:50.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13229.10    51.68    4838.83    1701.58    7643.40
00:24:50.803 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7:  4849.30    18.94   13197.65    1881.31   58759.85
00:24:50.803 ========================================================
00:24:50.803 Total                                                                  : 28735.60   112.25    8910.35    1701.58   58759.85
00:24:50.803
00:24:50.803 01:59:36 -- target/perf_adq.sh@104 -- # nvmftestfini
00:24:50.803 01:59:36 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:50.803 01:59:36 -- nvmf/common.sh@116 -- # sync
00:24:50.803 01:59:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:50.803 01:59:36 -- nvmf/common.sh@119 -- # set +e
00:24:50.803 01:59:36 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:50.803 01:59:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:50.803 rmmod nvme_tcp
00:24:50.803 rmmod nvme_fabrics
00:24:50.803 rmmod nvme_keyring
00:24:50.803 01:59:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:50.803 01:59:36 -- nvmf/common.sh@123 -- # set -e
00:24:50.803 01:59:36 -- nvmf/common.sh@124 -- # return 0
00:24:50.803 01:59:36 -- nvmf/common.sh@477 -- # '[' -n 2228119 ']'
00:24:50.803 01:59:36 -- nvmf/common.sh@478 -- # killprocess 2228119
00:24:50.803 01:59:36 -- common/autotest_common.sh@926 -- # '[' -z 2228119 ']'
00:24:50.803 01:59:36 -- common/autotest_common.sh@930 -- # kill -0 2228119
00:24:50.803 01:59:36 -- common/autotest_common.sh@931 -- # uname
00:24:50.803 01:59:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:50.803 01:59:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2228119
00:24:51.061 01:59:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:24:51.061 01:59:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:24:51.061 01:59:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2228119'
00:24:51.061 killing process with pid 2228119
00:24:51.061 01:59:36 -- common/autotest_common.sh@945 -- # kill 2228119
00:24:51.061 01:59:36 -- common/autotest_common.sh@950 -- # wait 2228119
00:24:51.320 01:59:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:24:51.320 01:59:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:24:51.320 01:59:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:24:51.320 01:59:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:51.320 01:59:36 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:24:51.320 01:59:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:51.320 01:59:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:51.320 01:59:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:53.223 01:59:38 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:24:53.223 01:59:38 -- target/perf_adq.sh@106 -- # trap - SIGINT SIGTERM EXIT
00:24:53.223
00:24:53.223 real 0m43.585s
00:24:53.223 user 2m29.727s
00:24:53.223 sys 0m13.156s
00:24:53.223 01:59:38 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:53.223 01:59:38 -- common/autotest_common.sh@10 -- # set +x
00:24:53.223
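Two arithmetic checks on the table above hold up: the per-core IOPS sum to 5028.60 + 5628.60 + 13229.10 + 4849.30 = 28735.60, which is exactly the Total row, and the 8910.35 us total average is the IOPS-weighted mean of the per-core averages (sum of IOPS_i * avg_i divided by 28735.60 comes to roughly 8910 us), not a simple mean of the four. The skew is the interesting part: core 6 pushed about 2.5x the IOPS of each other core at well under half their latency, which is the kind of imbalance the ADQ steering configured earlier is intended to produce and which the pre-run poll-group check (two of four poll groups with zero I/O queue pairs) is consistent with.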
************************************ 00:24:53.223 END TEST nvmf_perf_adq 00:24:53.223 ************************************ 00:24:53.223 01:59:38 -- nvmf/nvmf.sh@80 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:53.223 01:59:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:53.223 01:59:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:53.223 01:59:38 -- common/autotest_common.sh@10 -- # set +x 00:24:53.223 ************************************ 00:24:53.223 START TEST nvmf_shutdown 00:24:53.223 ************************************ 00:24:53.224 01:59:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:53.224 * Looking for test storage... 00:24:53.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:53.224 01:59:38 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.224 01:59:38 -- nvmf/common.sh@7 -- # uname -s 00:24:53.224 01:59:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.224 01:59:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.224 01:59:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.224 01:59:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.224 01:59:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.224 01:59:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.224 01:59:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.224 01:59:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.224 01:59:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.224 01:59:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.224 01:59:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:53.224 01:59:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:53.224 01:59:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.224 01:59:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.224 01:59:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.224 01:59:38 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.224 01:59:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.224 01:59:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.224 01:59:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.224 01:59:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.224 01:59:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.224 01:59:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.224 01:59:38 -- paths/export.sh@5 -- # export PATH 00:24:53.224 01:59:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.224 01:59:38 -- nvmf/common.sh@46 -- # : 0 00:24:53.224 01:59:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:53.224 01:59:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:53.224 01:59:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:53.224 01:59:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.224 01:59:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.224 01:59:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:53.224 01:59:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:53.224 01:59:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:53.224 01:59:38 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:53.224 01:59:38 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:53.224 01:59:38 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:53.224 01:59:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:24:53.224 01:59:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:53.224 01:59:38 -- common/autotest_common.sh@10 -- # set +x 00:24:53.224 ************************************ 00:24:53.224 START TEST nvmf_shutdown_tc1 00:24:53.224 ************************************ 00:24:53.224 01:59:38 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc1 00:24:53.224 01:59:38 -- target/shutdown.sh@74 -- # starttarget 00:24:53.224 01:59:38 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:53.224 01:59:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:53.224 01:59:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.224 01:59:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:53.224 01:59:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:53.224 01:59:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:53.224 
01:59:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.224 01:59:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:53.224 01:59:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.224 01:59:38 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:53.224 01:59:38 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:53.224 01:59:38 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:53.224 01:59:38 -- common/autotest_common.sh@10 -- # set +x 00:24:55.131 01:59:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:55.131 01:59:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:55.131 01:59:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:55.131 01:59:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:55.131 01:59:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:55.131 01:59:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:55.131 01:59:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:55.131 01:59:40 -- nvmf/common.sh@294 -- # net_devs=() 00:24:55.131 01:59:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:55.131 01:59:40 -- nvmf/common.sh@295 -- # e810=() 00:24:55.131 01:59:40 -- nvmf/common.sh@295 -- # local -ga e810 00:24:55.131 01:59:40 -- nvmf/common.sh@296 -- # x722=() 00:24:55.131 01:59:40 -- nvmf/common.sh@296 -- # local -ga x722 00:24:55.131 01:59:40 -- nvmf/common.sh@297 -- # mlx=() 00:24:55.131 01:59:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:55.131 01:59:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.131 01:59:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.131 01:59:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.131 01:59:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.131 01:59:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.131 01:59:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.131 01:59:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.131 01:59:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.131 01:59:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.131 01:59:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.131 01:59:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.131 01:59:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:55.131 01:59:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:24:55.131 01:59:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:24:55.131 01:59:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:24:55.131 01:59:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:24:55.131 01:59:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:55.131 01:59:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:55.131 01:59:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:55.131 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:55.131 01:59:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:55.131 01:59:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:55.131 01:59:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.131 01:59:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.131 01:59:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:55.131 01:59:40 -- nvmf/common.sh@339 
-- # for pci in "${pci_devs[@]}" 00:24:55.131 01:59:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:55.131 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:55.131 01:59:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:24:55.131 01:59:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:24:55.131 01:59:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.131 01:59:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.131 01:59:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:24:55.132 01:59:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:55.132 01:59:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:24:55.132 01:59:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:24:55.132 01:59:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:55.132 01:59:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.132 01:59:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:55.132 01:59:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.132 01:59:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:55.132 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:55.132 01:59:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.132 01:59:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:55.132 01:59:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.132 01:59:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:55.132 01:59:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.132 01:59:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:55.132 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:55.132 01:59:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.132 01:59:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:55.132 01:59:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:55.132 01:59:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:55.132 01:59:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:24:55.132 01:59:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:24:55.132 01:59:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.132 01:59:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.132 01:59:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.132 01:59:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:24:55.132 01:59:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.132 01:59:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.132 01:59:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:24:55.132 01:59:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.132 01:59:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.132 01:59:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:24:55.391 01:59:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:24:55.391 01:59:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.391 01:59:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.391 01:59:40 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.391 01:59:40 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.391 01:59:40 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:24:55.391 01:59:40 -- nvmf/common.sh@259 
-- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.391 01:59:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.391 01:59:40 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.391 01:59:40 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:24:55.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:24:55.391 00:24:55.391 --- 10.0.0.2 ping statistics --- 00:24:55.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.391 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:24:55.391 01:59:40 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:55.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:24:55.391 00:24:55.391 --- 10.0.0.1 ping statistics --- 00:24:55.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.391 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:24:55.391 01:59:40 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.391 01:59:40 -- nvmf/common.sh@410 -- # return 0 00:24:55.391 01:59:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:55.391 01:59:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.391 01:59:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:24:55.391 01:59:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:24:55.391 01:59:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.391 01:59:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:24:55.391 01:59:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:24:55.391 01:59:40 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:55.391 01:59:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:55.391 01:59:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:55.391 01:59:40 -- common/autotest_common.sh@10 -- # set +x 00:24:55.391 01:59:40 -- nvmf/common.sh@469 -- # nvmfpid=2231353 00:24:55.391 01:59:40 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:55.391 01:59:40 -- nvmf/common.sh@470 -- # waitforlisten 2231353 00:24:55.391 01:59:40 -- common/autotest_common.sh@819 -- # '[' -z 2231353 ']' 00:24:55.391 01:59:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.391 01:59:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:55.391 01:59:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.391 01:59:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:55.391 01:59:40 -- common/autotest_common.sh@10 -- # set +x 00:24:55.391 [2024-04-15 01:59:40.993239] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
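The nvmf_tcp_init trace above builds the standard two-port loopback topology these phy tests use: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator side at 10.0.0.1, and both directions are ping-verified before the target starts. Condensed from the commands traced above into a standalone sketch:

# Target port in its own namespace, initiator port in the root namespace.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"
ip addr add 10.0.0.1/24 dev cvl_0_1                         # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0     # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in

# Sanity checks in both directions, as in the ping output above:
ping -c 1 10.0.0.2                        # root ns -> target
ip netns exec "$NS" ping -c 1 10.0.0.1    # target ns -> initiator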
00:24:55.391 [2024-04-15 01:59:40.993311] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.391 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.650 [2024-04-15 01:59:41.062922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:55.650 [2024-04-15 01:59:41.152525] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:55.650 [2024-04-15 01:59:41.152681] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.650 [2024-04-15 01:59:41.152709] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.650 [2024-04-15 01:59:41.152725] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:55.650 [2024-04-15 01:59:41.152822] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.650 [2024-04-15 01:59:41.152919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.650 [2024-04-15 01:59:41.152986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:55.650 [2024-04-15 01:59:41.152988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:56.583 01:59:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:56.583 01:59:41 -- common/autotest_common.sh@852 -- # return 0 00:24:56.583 01:59:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:56.583 01:59:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:56.583 01:59:41 -- common/autotest_common.sh@10 -- # set +x 00:24:56.583 01:59:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.583 01:59:41 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:56.583 01:59:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.583 01:59:41 -- common/autotest_common.sh@10 -- # set +x 00:24:56.583 [2024-04-15 01:59:41.952584] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.583 01:59:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.583 01:59:41 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:56.583 01:59:41 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:56.583 01:59:41 -- common/autotest_common.sh@712 -- # xtrace_disable 00:24:56.583 01:59:41 -- common/autotest_common.sh@10 -- # set +x 00:24:56.583 01:59:41 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:56.583 01:59:41 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.583 01:59:41 -- target/shutdown.sh@28 -- # cat 00:24:56.583 01:59:41 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.583 01:59:41 -- target/shutdown.sh@28 -- # cat 00:24:56.583 01:59:41 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.583 01:59:41 -- target/shutdown.sh@28 -- # cat 00:24:56.583 01:59:41 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.583 01:59:41 -- target/shutdown.sh@28 -- # cat 00:24:56.583 01:59:41 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.583 01:59:41 -- target/shutdown.sh@28 -- # cat 00:24:56.583 01:59:41 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.583 01:59:41 -- 
target/shutdown.sh@28 -- # cat 00:24:56.583 01:59:41 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.583 01:59:41 -- target/shutdown.sh@28 -- # cat 00:24:56.583 01:59:41 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.583 01:59:41 -- target/shutdown.sh@28 -- # cat 00:24:56.583 01:59:41 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.583 01:59:41 -- target/shutdown.sh@28 -- # cat 00:24:56.583 01:59:41 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:56.583 01:59:41 -- target/shutdown.sh@28 -- # cat 00:24:56.583 01:59:41 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:56.583 01:59:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:56.583 01:59:41 -- common/autotest_common.sh@10 -- # set +x 00:24:56.583 Malloc1 00:24:56.583 [2024-04-15 01:59:42.027425] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.583 Malloc2 00:24:56.583 Malloc3 00:24:56.583 Malloc4 00:24:56.583 Malloc5 00:24:56.841 Malloc6 00:24:56.841 Malloc7 00:24:56.841 Malloc8 00:24:56.841 Malloc9 00:24:56.841 Malloc10 00:24:56.841 01:59:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:56.841 01:59:42 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:56.841 01:59:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:24:56.841 01:59:42 -- common/autotest_common.sh@10 -- # set +x 00:24:56.841 01:59:42 -- target/shutdown.sh@78 -- # perfpid=2231669 00:24:56.841 01:59:42 -- target/shutdown.sh@79 -- # waitforlisten 2231669 /var/tmp/bdevperf.sock 00:24:56.841 01:59:42 -- common/autotest_common.sh@819 -- # '[' -z 2231669 ']' 00:24:56.841 01:59:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:56.841 01:59:42 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:56.841 01:59:42 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:56.841 01:59:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:24:56.841 01:59:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:56.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
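The gen_nvmf_target_json trace that follows is worth decoding once, since every shutdown test reuses it: for each subsystem number it emits one bdev_nvme_attach_controller entry, comma-joins the entries, and pretty-prints the result through jq; bdevperf and bdev_svc then consume the output via --json /dev/fd/63 (process substitution). A condensed sketch of what the trace shows; the outer subsystems/bdev wrapper is not visible in the xtrace output, so its exact shape here is an assumption:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One attach-controller entry per subsystem, fields as traced below.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
    done
    # Comma-join the entries; the bdev-subsystem wrapper below is assumed.
    jq . <<JSON
{"subsystems": [{"subsystem": "bdev", "config": [$(IFS=","; printf '%s\n' "${config[*]}")]}]}
JSON
}

With TEST_TRANSPORT=tcp, NVMF_FIRST_TARGET_IP=10.0.0.2 and NVMF_PORT=4420, the rendered entries for Nvme1 through Nvme10 are exactly what the printf output below shows.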
00:24:56.841 01:59:42 -- nvmf/common.sh@520 -- # config=() 00:24:56.841 01:59:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:24:56.841 01:59:42 -- nvmf/common.sh@520 -- # local subsystem config 00:24:56.841 01:59:42 -- common/autotest_common.sh@10 -- # set +x 00:24:56.841 01:59:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:56.841 01:59:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:56.841 { 00:24:56.841 "params": { 00:24:56.841 "name": "Nvme$subsystem", 00:24:56.841 "trtype": "$TEST_TRANSPORT", 00:24:56.841 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:56.841 "adrfam": "ipv4", 00:24:56.841 "trsvcid": "$NVMF_PORT", 00:24:56.841 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:56.841 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:56.841 "hdgst": ${hdgst:-false}, 00:24:56.841 "ddgst": ${ddgst:-false} 00:24:56.841 }, 00:24:56.841 "method": "bdev_nvme_attach_controller" 00:24:56.841 } 00:24:56.841 EOF 00:24:56.841 )") 00:24:56.841 01:59:42 -- nvmf/common.sh@542 -- # cat 00:24:56.841 01:59:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:57.100 01:59:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:57.100 { 00:24:57.100 "params": { 00:24:57.100 "name": "Nvme$subsystem", 00:24:57.100 "trtype": "$TEST_TRANSPORT", 00:24:57.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "$NVMF_PORT", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:57.100 "hdgst": ${hdgst:-false}, 00:24:57.100 "ddgst": ${ddgst:-false} 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 } 00:24:57.100 EOF 00:24:57.100 )") 00:24:57.100 01:59:42 -- nvmf/common.sh@542 -- # cat 00:24:57.100 01:59:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:57.100 01:59:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:57.100 { 00:24:57.100 "params": { 00:24:57.100 "name": "Nvme$subsystem", 00:24:57.100 "trtype": "$TEST_TRANSPORT", 00:24:57.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "$NVMF_PORT", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:57.100 "hdgst": ${hdgst:-false}, 00:24:57.100 "ddgst": ${ddgst:-false} 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 } 00:24:57.100 EOF 00:24:57.100 )") 00:24:57.100 01:59:42 -- nvmf/common.sh@542 -- # cat 00:24:57.100 01:59:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:57.100 01:59:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:57.100 { 00:24:57.100 "params": { 00:24:57.100 "name": "Nvme$subsystem", 00:24:57.100 "trtype": "$TEST_TRANSPORT", 00:24:57.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "$NVMF_PORT", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:57.100 "hdgst": ${hdgst:-false}, 00:24:57.100 "ddgst": ${ddgst:-false} 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 } 00:24:57.100 EOF 00:24:57.100 )") 00:24:57.100 01:59:42 -- nvmf/common.sh@542 -- # cat 00:24:57.100 01:59:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:57.100 01:59:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:57.100 { 00:24:57.100 "params": { 00:24:57.100 "name": "Nvme$subsystem", 00:24:57.100 "trtype": 
"$TEST_TRANSPORT", 00:24:57.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "$NVMF_PORT", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:57.100 "hdgst": ${hdgst:-false}, 00:24:57.100 "ddgst": ${ddgst:-false} 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 } 00:24:57.100 EOF 00:24:57.100 )") 00:24:57.100 01:59:42 -- nvmf/common.sh@542 -- # cat 00:24:57.100 01:59:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:57.100 01:59:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:57.100 { 00:24:57.100 "params": { 00:24:57.100 "name": "Nvme$subsystem", 00:24:57.100 "trtype": "$TEST_TRANSPORT", 00:24:57.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "$NVMF_PORT", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:57.100 "hdgst": ${hdgst:-false}, 00:24:57.100 "ddgst": ${ddgst:-false} 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 } 00:24:57.100 EOF 00:24:57.100 )") 00:24:57.100 01:59:42 -- nvmf/common.sh@542 -- # cat 00:24:57.100 01:59:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:57.100 01:59:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:57.100 { 00:24:57.100 "params": { 00:24:57.100 "name": "Nvme$subsystem", 00:24:57.100 "trtype": "$TEST_TRANSPORT", 00:24:57.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "$NVMF_PORT", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:57.100 "hdgst": ${hdgst:-false}, 00:24:57.100 "ddgst": ${ddgst:-false} 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 } 00:24:57.100 EOF 00:24:57.100 )") 00:24:57.100 01:59:42 -- nvmf/common.sh@542 -- # cat 00:24:57.100 01:59:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:57.100 01:59:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:57.100 { 00:24:57.100 "params": { 00:24:57.100 "name": "Nvme$subsystem", 00:24:57.100 "trtype": "$TEST_TRANSPORT", 00:24:57.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "$NVMF_PORT", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:57.100 "hdgst": ${hdgst:-false}, 00:24:57.100 "ddgst": ${ddgst:-false} 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 } 00:24:57.100 EOF 00:24:57.100 )") 00:24:57.100 01:59:42 -- nvmf/common.sh@542 -- # cat 00:24:57.100 01:59:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:57.100 01:59:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:57.100 { 00:24:57.100 "params": { 00:24:57.100 "name": "Nvme$subsystem", 00:24:57.100 "trtype": "$TEST_TRANSPORT", 00:24:57.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "$NVMF_PORT", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:57.100 "hdgst": ${hdgst:-false}, 00:24:57.100 "ddgst": ${ddgst:-false} 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 } 00:24:57.100 EOF 00:24:57.100 )") 00:24:57.100 01:59:42 -- nvmf/common.sh@542 -- # cat 00:24:57.100 
01:59:42 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:57.100 01:59:42 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:57.100 { 00:24:57.100 "params": { 00:24:57.100 "name": "Nvme$subsystem", 00:24:57.100 "trtype": "$TEST_TRANSPORT", 00:24:57.100 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "$NVMF_PORT", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:57.100 "hdgst": ${hdgst:-false}, 00:24:57.100 "ddgst": ${ddgst:-false} 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 } 00:24:57.100 EOF 00:24:57.100 )") 00:24:57.100 01:59:42 -- nvmf/common.sh@542 -- # cat 00:24:57.100 01:59:42 -- nvmf/common.sh@544 -- # jq . 00:24:57.100 01:59:42 -- nvmf/common.sh@545 -- # IFS=, 00:24:57.100 01:59:42 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:57.100 "params": { 00:24:57.100 "name": "Nvme1", 00:24:57.100 "trtype": "tcp", 00:24:57.100 "traddr": "10.0.0.2", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "4420", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:57.100 "hdgst": false, 00:24:57.100 "ddgst": false 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 },{ 00:24:57.100 "params": { 00:24:57.100 "name": "Nvme2", 00:24:57.100 "trtype": "tcp", 00:24:57.100 "traddr": "10.0.0.2", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "4420", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:57.100 "hdgst": false, 00:24:57.100 "ddgst": false 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 },{ 00:24:57.100 "params": { 00:24:57.100 "name": "Nvme3", 00:24:57.100 "trtype": "tcp", 00:24:57.100 "traddr": "10.0.0.2", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "4420", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:57.100 "hdgst": false, 00:24:57.100 "ddgst": false 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 },{ 00:24:57.100 "params": { 00:24:57.100 "name": "Nvme4", 00:24:57.100 "trtype": "tcp", 00:24:57.100 "traddr": "10.0.0.2", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "4420", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:57.100 "hdgst": false, 00:24:57.100 "ddgst": false 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 },{ 00:24:57.100 "params": { 00:24:57.100 "name": "Nvme5", 00:24:57.100 "trtype": "tcp", 00:24:57.100 "traddr": "10.0.0.2", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "4420", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:57.100 "hdgst": false, 00:24:57.100 "ddgst": false 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 },{ 00:24:57.100 "params": { 00:24:57.100 "name": "Nvme6", 00:24:57.100 "trtype": "tcp", 00:24:57.100 "traddr": "10.0.0.2", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "4420", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:57.100 "hdgst": false, 00:24:57.100 "ddgst": false 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 },{ 00:24:57.100 "params": { 00:24:57.100 
"name": "Nvme7", 00:24:57.100 "trtype": "tcp", 00:24:57.100 "traddr": "10.0.0.2", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "4420", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:57.100 "hdgst": false, 00:24:57.100 "ddgst": false 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 },{ 00:24:57.100 "params": { 00:24:57.100 "name": "Nvme8", 00:24:57.100 "trtype": "tcp", 00:24:57.100 "traddr": "10.0.0.2", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "4420", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:57.100 "hdgst": false, 00:24:57.100 "ddgst": false 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 },{ 00:24:57.100 "params": { 00:24:57.100 "name": "Nvme9", 00:24:57.100 "trtype": "tcp", 00:24:57.100 "traddr": "10.0.0.2", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "4420", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:57.100 "hdgst": false, 00:24:57.100 "ddgst": false 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 },{ 00:24:57.100 "params": { 00:24:57.100 "name": "Nvme10", 00:24:57.100 "trtype": "tcp", 00:24:57.100 "traddr": "10.0.0.2", 00:24:57.100 "adrfam": "ipv4", 00:24:57.100 "trsvcid": "4420", 00:24:57.100 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:57.100 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:57.100 "hdgst": false, 00:24:57.100 "ddgst": false 00:24:57.100 }, 00:24:57.100 "method": "bdev_nvme_attach_controller" 00:24:57.100 }' 00:24:57.100 [2024-04-15 01:59:42.525415] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:24:57.100 [2024-04-15 01:59:42.525499] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:57.100 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.100 [2024-04-15 01:59:42.589811] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.100 [2024-04-15 01:59:42.674067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.005 01:59:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:59.005 01:59:44 -- common/autotest_common.sh@852 -- # return 0 00:24:59.005 01:59:44 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:59.005 01:59:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:24:59.005 01:59:44 -- common/autotest_common.sh@10 -- # set +x 00:24:59.005 01:59:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:24:59.005 01:59:44 -- target/shutdown.sh@83 -- # kill -9 2231669 00:24:59.005 01:59:44 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:59.005 01:59:44 -- target/shutdown.sh@87 -- # sleep 1 00:24:59.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2231669 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:59.571 01:59:45 -- target/shutdown.sh@88 -- # kill -0 2231353 00:24:59.571 01:59:45 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:59.571 01:59:45 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:59.571 01:59:45 -- nvmf/common.sh@520 -- # config=() 00:24:59.571 01:59:45 -- nvmf/common.sh@520 -- # local subsystem config 00:24:59.571 01:59:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:59.571 01:59:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:59.571 { 00:24:59.571 "params": { 00:24:59.571 "name": "Nvme$subsystem", 00:24:59.571 "trtype": "$TEST_TRANSPORT", 00:24:59.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.571 "adrfam": "ipv4", 00:24:59.571 "trsvcid": "$NVMF_PORT", 00:24:59.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.571 "hdgst": ${hdgst:-false}, 00:24:59.571 "ddgst": ${ddgst:-false} 00:24:59.571 }, 00:24:59.571 "method": "bdev_nvme_attach_controller" 00:24:59.571 } 00:24:59.571 EOF 00:24:59.571 )") 00:24:59.571 01:59:45 -- nvmf/common.sh@542 -- # cat 00:24:59.571 01:59:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:59.571 01:59:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:59.571 { 00:24:59.571 "params": { 00:24:59.571 "name": "Nvme$subsystem", 00:24:59.571 "trtype": "$TEST_TRANSPORT", 00:24:59.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.571 "adrfam": "ipv4", 00:24:59.571 "trsvcid": "$NVMF_PORT", 00:24:59.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.571 "hdgst": ${hdgst:-false}, 00:24:59.571 "ddgst": ${ddgst:-false} 00:24:59.571 }, 00:24:59.571 "method": "bdev_nvme_attach_controller" 00:24:59.571 } 00:24:59.571 EOF 00:24:59.571 )") 00:24:59.571 01:59:45 -- nvmf/common.sh@542 -- # cat 00:24:59.571 01:59:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:59.571 01:59:45 -- 
nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:59.571 { 00:24:59.571 "params": { 00:24:59.571 "name": "Nvme$subsystem", 00:24:59.571 "trtype": "$TEST_TRANSPORT", 00:24:59.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.571 "adrfam": "ipv4", 00:24:59.571 "trsvcid": "$NVMF_PORT", 00:24:59.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.571 "hdgst": ${hdgst:-false}, 00:24:59.571 "ddgst": ${ddgst:-false} 00:24:59.571 }, 00:24:59.571 "method": "bdev_nvme_attach_controller" 00:24:59.571 } 00:24:59.571 EOF 00:24:59.571 )") 00:24:59.571 01:59:45 -- nvmf/common.sh@542 -- # cat 00:24:59.571 01:59:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:59.571 01:59:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:59.571 { 00:24:59.571 "params": { 00:24:59.571 "name": "Nvme$subsystem", 00:24:59.571 "trtype": "$TEST_TRANSPORT", 00:24:59.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.571 "adrfam": "ipv4", 00:24:59.571 "trsvcid": "$NVMF_PORT", 00:24:59.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.571 "hdgst": ${hdgst:-false}, 00:24:59.571 "ddgst": ${ddgst:-false} 00:24:59.571 }, 00:24:59.571 "method": "bdev_nvme_attach_controller" 00:24:59.571 } 00:24:59.571 EOF 00:24:59.571 )") 00:24:59.571 01:59:45 -- nvmf/common.sh@542 -- # cat 00:24:59.571 01:59:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:59.571 01:59:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:59.571 { 00:24:59.571 "params": { 00:24:59.571 "name": "Nvme$subsystem", 00:24:59.571 "trtype": "$TEST_TRANSPORT", 00:24:59.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.571 "adrfam": "ipv4", 00:24:59.571 "trsvcid": "$NVMF_PORT", 00:24:59.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.571 "hdgst": ${hdgst:-false}, 00:24:59.571 "ddgst": ${ddgst:-false} 00:24:59.571 }, 00:24:59.571 "method": "bdev_nvme_attach_controller" 00:24:59.571 } 00:24:59.571 EOF 00:24:59.571 )") 00:24:59.571 01:59:45 -- nvmf/common.sh@542 -- # cat 00:24:59.571 01:59:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:59.571 01:59:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:59.571 { 00:24:59.571 "params": { 00:24:59.571 "name": "Nvme$subsystem", 00:24:59.571 "trtype": "$TEST_TRANSPORT", 00:24:59.571 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.571 "adrfam": "ipv4", 00:24:59.571 "trsvcid": "$NVMF_PORT", 00:24:59.571 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.571 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.571 "hdgst": ${hdgst:-false}, 00:24:59.571 "ddgst": ${ddgst:-false} 00:24:59.571 }, 00:24:59.571 "method": "bdev_nvme_attach_controller" 00:24:59.571 } 00:24:59.571 EOF 00:24:59.571 )") 00:24:59.571 01:59:45 -- nvmf/common.sh@542 -- # cat 00:24:59.830 01:59:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:59.830 01:59:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:59.830 { 00:24:59.830 "params": { 00:24:59.830 "name": "Nvme$subsystem", 00:24:59.830 "trtype": "$TEST_TRANSPORT", 00:24:59.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.830 "adrfam": "ipv4", 00:24:59.830 "trsvcid": "$NVMF_PORT", 00:24:59.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.830 "hdgst": ${hdgst:-false}, 00:24:59.830 "ddgst": ${ddgst:-false} 00:24:59.830 }, 00:24:59.830 
"method": "bdev_nvme_attach_controller" 00:24:59.830 } 00:24:59.830 EOF 00:24:59.830 )") 00:24:59.830 01:59:45 -- nvmf/common.sh@542 -- # cat 00:24:59.830 01:59:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:59.830 01:59:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:59.830 { 00:24:59.830 "params": { 00:24:59.830 "name": "Nvme$subsystem", 00:24:59.830 "trtype": "$TEST_TRANSPORT", 00:24:59.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.830 "adrfam": "ipv4", 00:24:59.830 "trsvcid": "$NVMF_PORT", 00:24:59.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.830 "hdgst": ${hdgst:-false}, 00:24:59.830 "ddgst": ${ddgst:-false} 00:24:59.830 }, 00:24:59.830 "method": "bdev_nvme_attach_controller" 00:24:59.830 } 00:24:59.830 EOF 00:24:59.830 )") 00:24:59.830 01:59:45 -- nvmf/common.sh@542 -- # cat 00:24:59.830 01:59:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:59.830 01:59:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:59.830 { 00:24:59.830 "params": { 00:24:59.830 "name": "Nvme$subsystem", 00:24:59.830 "trtype": "$TEST_TRANSPORT", 00:24:59.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.830 "adrfam": "ipv4", 00:24:59.830 "trsvcid": "$NVMF_PORT", 00:24:59.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.830 "hdgst": ${hdgst:-false}, 00:24:59.830 "ddgst": ${ddgst:-false} 00:24:59.830 }, 00:24:59.830 "method": "bdev_nvme_attach_controller" 00:24:59.830 } 00:24:59.830 EOF 00:24:59.830 )") 00:24:59.830 01:59:45 -- nvmf/common.sh@542 -- # cat 00:24:59.830 01:59:45 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:59.830 01:59:45 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:59.830 { 00:24:59.830 "params": { 00:24:59.830 "name": "Nvme$subsystem", 00:24:59.830 "trtype": "$TEST_TRANSPORT", 00:24:59.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:59.830 "adrfam": "ipv4", 00:24:59.830 "trsvcid": "$NVMF_PORT", 00:24:59.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:59.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:59.830 "hdgst": ${hdgst:-false}, 00:24:59.830 "ddgst": ${ddgst:-false} 00:24:59.830 }, 00:24:59.830 "method": "bdev_nvme_attach_controller" 00:24:59.830 } 00:24:59.830 EOF 00:24:59.830 )") 00:24:59.830 01:59:45 -- nvmf/common.sh@542 -- # cat 00:24:59.830 01:59:45 -- nvmf/common.sh@544 -- # jq . 
00:24:59.830 01:59:45 -- nvmf/common.sh@545 -- # IFS=, 00:24:59.831 01:59:45 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:59.831 "params": { 00:24:59.831 "name": "Nvme1", 00:24:59.831 "trtype": "tcp", 00:24:59.831 "traddr": "10.0.0.2", 00:24:59.831 "adrfam": "ipv4", 00:24:59.831 "trsvcid": "4420", 00:24:59.831 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:59.831 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:59.831 "hdgst": false, 00:24:59.831 "ddgst": false 00:24:59.831 }, 00:24:59.831 "method": "bdev_nvme_attach_controller" 00:24:59.831 },{ 00:24:59.831 "params": { 00:24:59.831 "name": "Nvme2", 00:24:59.831 "trtype": "tcp", 00:24:59.831 "traddr": "10.0.0.2", 00:24:59.831 "adrfam": "ipv4", 00:24:59.831 "trsvcid": "4420", 00:24:59.831 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:59.831 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:59.831 "hdgst": false, 00:24:59.831 "ddgst": false 00:24:59.831 }, 00:24:59.831 "method": "bdev_nvme_attach_controller" 00:24:59.831 },{ 00:24:59.831 "params": { 00:24:59.831 "name": "Nvme3", 00:24:59.831 "trtype": "tcp", 00:24:59.831 "traddr": "10.0.0.2", 00:24:59.831 "adrfam": "ipv4", 00:24:59.831 "trsvcid": "4420", 00:24:59.831 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:59.831 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:59.831 "hdgst": false, 00:24:59.831 "ddgst": false 00:24:59.831 }, 00:24:59.831 "method": "bdev_nvme_attach_controller" 00:24:59.831 },{ 00:24:59.831 "params": { 00:24:59.831 "name": "Nvme4", 00:24:59.831 "trtype": "tcp", 00:24:59.831 "traddr": "10.0.0.2", 00:24:59.831 "adrfam": "ipv4", 00:24:59.831 "trsvcid": "4420", 00:24:59.831 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:59.831 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:59.831 "hdgst": false, 00:24:59.831 "ddgst": false 00:24:59.831 }, 00:24:59.831 "method": "bdev_nvme_attach_controller" 00:24:59.831 },{ 00:24:59.831 "params": { 00:24:59.831 "name": "Nvme5", 00:24:59.831 "trtype": "tcp", 00:24:59.831 "traddr": "10.0.0.2", 00:24:59.831 "adrfam": "ipv4", 00:24:59.831 "trsvcid": "4420", 00:24:59.831 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:59.831 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:59.831 "hdgst": false, 00:24:59.831 "ddgst": false 00:24:59.831 }, 00:24:59.831 "method": "bdev_nvme_attach_controller" 00:24:59.831 },{ 00:24:59.831 "params": { 00:24:59.831 "name": "Nvme6", 00:24:59.831 "trtype": "tcp", 00:24:59.831 "traddr": "10.0.0.2", 00:24:59.831 "adrfam": "ipv4", 00:24:59.831 "trsvcid": "4420", 00:24:59.831 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:59.831 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:59.831 "hdgst": false, 00:24:59.831 "ddgst": false 00:24:59.831 }, 00:24:59.831 "method": "bdev_nvme_attach_controller" 00:24:59.831 },{ 00:24:59.831 "params": { 00:24:59.831 "name": "Nvme7", 00:24:59.831 "trtype": "tcp", 00:24:59.831 "traddr": "10.0.0.2", 00:24:59.831 "adrfam": "ipv4", 00:24:59.831 "trsvcid": "4420", 00:24:59.831 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:59.831 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:59.831 "hdgst": false, 00:24:59.831 "ddgst": false 00:24:59.831 }, 00:24:59.831 "method": "bdev_nvme_attach_controller" 00:24:59.831 },{ 00:24:59.831 "params": { 00:24:59.831 "name": "Nvme8", 00:24:59.831 "trtype": "tcp", 00:24:59.831 "traddr": "10.0.0.2", 00:24:59.831 "adrfam": "ipv4", 00:24:59.831 "trsvcid": "4420", 00:24:59.831 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:59.831 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:59.831 "hdgst": false, 00:24:59.831 "ddgst": false 00:24:59.831 }, 00:24:59.831 "method": 
"bdev_nvme_attach_controller" 00:24:59.831 },{ 00:24:59.831 "params": { 00:24:59.831 "name": "Nvme9", 00:24:59.831 "trtype": "tcp", 00:24:59.831 "traddr": "10.0.0.2", 00:24:59.831 "adrfam": "ipv4", 00:24:59.831 "trsvcid": "4420", 00:24:59.831 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:59.831 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:59.831 "hdgst": false, 00:24:59.831 "ddgst": false 00:24:59.831 }, 00:24:59.831 "method": "bdev_nvme_attach_controller" 00:24:59.831 },{ 00:24:59.831 "params": { 00:24:59.831 "name": "Nvme10", 00:24:59.831 "trtype": "tcp", 00:24:59.831 "traddr": "10.0.0.2", 00:24:59.831 "adrfam": "ipv4", 00:24:59.831 "trsvcid": "4420", 00:24:59.831 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:59.831 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:59.831 "hdgst": false, 00:24:59.831 "ddgst": false 00:24:59.831 }, 00:24:59.831 "method": "bdev_nvme_attach_controller" 00:24:59.831 }' 00:24:59.831 [2024-04-15 01:59:45.240873] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:24:59.831 [2024-04-15 01:59:45.240950] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2231975 ] 00:24:59.831 EAL: No free 2048 kB hugepages reported on node 1 00:24:59.831 [2024-04-15 01:59:45.305983] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.831 [2024-04-15 01:59:45.391554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:01.204 Running I/O for 1 seconds... 00:25:02.586 00:25:02.586 Latency(us) 00:25:02.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.586 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:02.586 Verification LBA range: start 0x0 length 0x400 00:25:02.586 Nvme1n1 : 1.09 366.72 22.92 0.00 0.00 170622.55 11213.94 163111.82 00:25:02.586 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:02.586 Verification LBA range: start 0x0 length 0x400 00:25:02.586 Nvme2n1 : 1.08 368.99 23.06 0.00 0.00 168195.16 33787.45 142917.03 00:25:02.586 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:02.586 Verification LBA range: start 0x0 length 0x400 00:25:02.586 Nvme3n1 : 1.08 367.90 22.99 0.00 0.00 166430.46 47380.10 130489.46 00:25:02.586 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:02.586 Verification LBA range: start 0x0 length 0x400 00:25:02.586 Nvme4n1 : 1.08 369.98 23.12 0.00 0.00 165185.36 35535.08 136703.24 00:25:02.586 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:02.586 Verification LBA range: start 0x0 length 0x400 00:25:02.586 Nvme5n1 : 1.09 365.48 22.84 0.00 0.00 166083.57 36505.98 129712.73 00:25:02.586 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:02.586 Verification LBA range: start 0x0 length 0x400 00:25:02.586 Nvme6n1 : 1.09 366.63 22.91 0.00 0.00 164025.15 38447.79 134373.07 00:25:02.586 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:02.586 Verification LBA range: start 0x0 length 0x400 00:25:02.586 Nvme7n1 : 1.10 393.42 24.59 0.00 0.00 154171.17 9126.49 120392.06 00:25:02.587 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:02.587 Verification LBA range: start 0x0 length 0x400 00:25:02.587 Nvme8n1 : 1.10 362.84 22.68 0.00 0.00 163585.56 31845.64 
138256.69 00:25:02.587 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:02.587 Verification LBA range: start 0x0 length 0x400 00:25:02.587 Nvme9n1 : 1.10 361.47 22.59 0.00 0.00 164020.76 25437.68 139033.41 00:25:02.587 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:25:02.587 Verification LBA range: start 0x0 length 0x400 00:25:02.587 Nvme10n1 : 1.11 383.78 23.99 0.00 0.00 154101.95 6505.05 139033.41 00:25:02.587 =================================================================================================================== 00:25:02.587 Total : 3707.21 231.70 0.00 0.00 163497.65 6505.05 163111.82 00:25:02.587 01:59:48 -- target/shutdown.sh@93 -- # stoptarget 00:25:02.587 01:59:48 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:02.587 01:59:48 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:02.587 01:59:48 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:02.587 01:59:48 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:02.587 01:59:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:02.587 01:59:48 -- nvmf/common.sh@116 -- # sync 00:25:02.587 01:59:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:02.587 01:59:48 -- nvmf/common.sh@119 -- # set +e 00:25:02.587 01:59:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:02.587 01:59:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:02.587 rmmod nvme_tcp 00:25:02.587 rmmod nvme_fabrics 00:25:02.587 rmmod nvme_keyring 00:25:02.587 01:59:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:02.587 01:59:48 -- nvmf/common.sh@123 -- # set -e 00:25:02.587 01:59:48 -- nvmf/common.sh@124 -- # return 0 00:25:02.587 01:59:48 -- nvmf/common.sh@477 -- # '[' -n 2231353 ']' 00:25:02.587 01:59:48 -- nvmf/common.sh@478 -- # killprocess 2231353 00:25:02.587 01:59:48 -- common/autotest_common.sh@926 -- # '[' -z 2231353 ']' 00:25:02.587 01:59:48 -- common/autotest_common.sh@930 -- # kill -0 2231353 00:25:02.587 01:59:48 -- common/autotest_common.sh@931 -- # uname 00:25:02.587 01:59:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:02.587 01:59:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2231353 00:25:02.587 01:59:48 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:02.587 01:59:48 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:02.587 01:59:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2231353' 00:25:02.587 killing process with pid 2231353 00:25:02.587 01:59:48 -- common/autotest_common.sh@945 -- # kill 2231353 00:25:02.587 01:59:48 -- common/autotest_common.sh@950 -- # wait 2231353 00:25:03.154 01:59:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:03.154 01:59:48 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:03.154 01:59:48 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:03.154 01:59:48 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:03.154 01:59:48 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:03.154 01:59:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.154 01:59:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:03.154 01:59:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.057 01:59:50 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:05.316 00:25:05.316 real 0m11.862s 
00:25:05.316 user 0m34.319s 00:25:05.316 sys 0m3.235s 00:25:05.316 01:59:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:05.316 01:59:50 -- common/autotest_common.sh@10 -- # set +x 00:25:05.316 ************************************ 00:25:05.316 END TEST nvmf_shutdown_tc1 00:25:05.316 ************************************ 00:25:05.316 01:59:50 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:25:05.316 01:59:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:05.316 01:59:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:05.316 01:59:50 -- common/autotest_common.sh@10 -- # set +x 00:25:05.316 ************************************ 00:25:05.316 START TEST nvmf_shutdown_tc2 00:25:05.316 ************************************ 00:25:05.316 01:59:50 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc2 00:25:05.316 01:59:50 -- target/shutdown.sh@98 -- # starttarget 00:25:05.316 01:59:50 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:05.316 01:59:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:05.316 01:59:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.316 01:59:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:05.316 01:59:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:05.316 01:59:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:05.316 01:59:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.316 01:59:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:05.316 01:59:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.316 01:59:50 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:05.316 01:59:50 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:05.316 01:59:50 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:05.316 01:59:50 -- common/autotest_common.sh@10 -- # set +x 00:25:05.316 01:59:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:05.316 01:59:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:05.316 01:59:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:05.316 01:59:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:05.316 01:59:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:05.316 01:59:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:05.316 01:59:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:05.316 01:59:50 -- nvmf/common.sh@294 -- # net_devs=() 00:25:05.317 01:59:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:05.317 01:59:50 -- nvmf/common.sh@295 -- # e810=() 00:25:05.317 01:59:50 -- nvmf/common.sh@295 -- # local -ga e810 00:25:05.317 01:59:50 -- nvmf/common.sh@296 -- # x722=() 00:25:05.317 01:59:50 -- nvmf/common.sh@296 -- # local -ga x722 00:25:05.317 01:59:50 -- nvmf/common.sh@297 -- # mlx=() 00:25:05.317 01:59:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:05.317 01:59:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.317 01:59:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.317 01:59:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.317 01:59:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.317 01:59:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.317 01:59:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.317 01:59:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.317 01:59:50 -- nvmf/common.sh@313 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.317 01:59:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.317 01:59:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.317 01:59:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.317 01:59:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:05.317 01:59:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:05.317 01:59:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:05.317 01:59:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:05.317 01:59:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:05.317 01:59:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:05.317 01:59:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:05.317 01:59:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:05.317 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:05.317 01:59:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:05.317 01:59:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:05.317 01:59:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.317 01:59:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.317 01:59:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:05.317 01:59:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:05.317 01:59:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:05.317 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:05.317 01:59:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:05.317 01:59:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:05.317 01:59:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.317 01:59:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.317 01:59:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:05.317 01:59:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:05.317 01:59:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:05.317 01:59:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:05.317 01:59:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:05.317 01:59:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.317 01:59:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:05.317 01:59:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.317 01:59:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:05.317 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:05.317 01:59:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.317 01:59:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:05.317 01:59:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.317 01:59:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:05.317 01:59:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.317 01:59:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:05.317 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:05.317 01:59:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.317 01:59:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:05.317 01:59:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:05.317 01:59:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:05.317 01:59:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:05.317 01:59:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 
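Note: the nvmf_tcp_init trace below builds the test network by isolating one port of the dual-port ice NIC (cvl_0_0) in a private network namespace as the target side, 10.0.0.2, while the peer port (cvl_0_1) stays in the root namespace as the initiator side, 10.0.0.1; the two ports are evidently cabled back to back, since the cross-namespace pings below succeed. Condensed here into a standalone Bash sketch for reference, using exactly the interface names, namespace, addresses, and port that this log uses (run as root):

ip -4 addr flush cvl_0_0                    # start both ports from clean addresses
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # move one NIC port into it
ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator IP in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                          # reachability check in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1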
00:25:05.317 01:59:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:05.317 01:59:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:05.317 01:59:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:05.317 01:59:50 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:25:05.317 01:59:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:05.317 01:59:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:05.317 01:59:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:25:05.317 01:59:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:05.317 01:59:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:05.317 01:59:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:25:05.317 01:59:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:25:05.317 01:59:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:25:05.317 01:59:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:05.317 01:59:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:05.317 01:59:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:05.317 01:59:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:25:05.317 01:59:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:05.317 01:59:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:05.317 01:59:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:05.317 01:59:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:25:05.317 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:05.317 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms
00:25:05.317
00:25:05.317 --- 10.0.0.2 ping statistics ---
00:25:05.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:05.317 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms
00:25:05.317 01:59:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:05.317 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:05.317 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms
00:25:05.317
00:25:05.317 --- 10.0.0.1 ping statistics ---
00:25:05.317 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:05.317 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms
00:25:05.317 01:59:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:05.317 01:59:50 -- nvmf/common.sh@410 -- # return 0
00:25:05.317 01:59:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:25:05.317 01:59:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:05.317 01:59:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:25:05.317 01:59:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:25:05.317 01:59:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:05.317 01:59:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:25:05.317 01:59:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:25:05.317 01:59:50 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:25:05.317 01:59:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:25:05.317 01:59:50 -- common/autotest_common.sh@712 -- # xtrace_disable
00:25:05.317 01:59:50 -- common/autotest_common.sh@10 -- # set +x
00:25:05.317 01:59:50 -- nvmf/common.sh@469 -- # nvmfpid=2232761
00:25:05.317 01:59:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:25:05.317 01:59:50 -- nvmf/common.sh@470 -- # waitforlisten 2232761
00:25:05.317 01:59:50 -- common/autotest_common.sh@819 -- # '[' -z 2232761 ']'
00:25:05.317 01:59:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:05.317 01:59:50 -- common/autotest_common.sh@824 -- # local max_retries=100
00:25:05.317 01:59:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:05.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:05.317 01:59:50 -- common/autotest_common.sh@828 -- # xtrace_disable
00:25:05.317 01:59:50 -- common/autotest_common.sh@10 -- # set +x
00:25:05.576 [2024-04-15 01:59:50.956980] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
00:25:05.576 [2024-04-15 01:59:50.957086] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:05.576 EAL: No free 2048 kB hugepages reported on node 1
00:25:05.576 [2024-04-15 01:59:51.027873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:05.576 [2024-04-15 01:59:51.116976] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:25:05.576 [2024-04-15 01:59:51.117140] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:05.576 [2024-04-15 01:59:51.117161] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:05.576 [2024-04-15 01:59:51.117175] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:05.576 [2024-04-15 01:59:51.117270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.576 [2024-04-15 01:59:51.117371] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:05.576 [2024-04-15 01:59:51.117438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:05.576 [2024-04-15 01:59:51.117436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:06.507 01:59:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:06.508 01:59:51 -- common/autotest_common.sh@852 -- # return 0 00:25:06.508 01:59:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:06.508 01:59:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:06.508 01:59:51 -- common/autotest_common.sh@10 -- # set +x 00:25:06.508 01:59:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.508 01:59:51 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:06.508 01:59:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.508 01:59:51 -- common/autotest_common.sh@10 -- # set +x 00:25:06.508 [2024-04-15 01:59:51.943706] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.508 01:59:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:06.508 01:59:51 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:06.508 01:59:51 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:06.508 01:59:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:06.508 01:59:51 -- common/autotest_common.sh@10 -- # set +x 00:25:06.508 01:59:51 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:06.508 01:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.508 01:59:51 -- target/shutdown.sh@28 -- # cat 00:25:06.508 01:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.508 01:59:51 -- target/shutdown.sh@28 -- # cat 00:25:06.508 01:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.508 01:59:51 -- target/shutdown.sh@28 -- # cat 00:25:06.508 01:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.508 01:59:51 -- target/shutdown.sh@28 -- # cat 00:25:06.508 01:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.508 01:59:51 -- target/shutdown.sh@28 -- # cat 00:25:06.508 01:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.508 01:59:51 -- target/shutdown.sh@28 -- # cat 00:25:06.508 01:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.508 01:59:51 -- target/shutdown.sh@28 -- # cat 00:25:06.508 01:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.508 01:59:51 -- target/shutdown.sh@28 -- # cat 00:25:06.508 01:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.508 01:59:51 -- target/shutdown.sh@28 -- # cat 00:25:06.508 01:59:51 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:06.508 01:59:51 -- target/shutdown.sh@28 -- # cat 00:25:06.508 01:59:51 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:06.508 01:59:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:06.508 01:59:51 -- common/autotest_common.sh@10 -- # set +x 00:25:06.508 Malloc1 00:25:06.508 [2024-04-15 01:59:52.018560] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:06.508 Malloc2 
00:25:06.508 Malloc3 00:25:06.508 Malloc4 00:25:06.764 Malloc5 00:25:06.764 Malloc6 00:25:06.764 Malloc7 00:25:06.764 Malloc8 00:25:06.764 Malloc9 00:25:07.022 Malloc10 00:25:07.022 01:59:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:07.022 01:59:52 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:07.022 01:59:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:07.022 01:59:52 -- common/autotest_common.sh@10 -- # set +x 00:25:07.022 01:59:52 -- target/shutdown.sh@102 -- # perfpid=2232960 00:25:07.022 01:59:52 -- target/shutdown.sh@103 -- # waitforlisten 2232960 /var/tmp/bdevperf.sock 00:25:07.022 01:59:52 -- common/autotest_common.sh@819 -- # '[' -z 2232960 ']' 00:25:07.022 01:59:52 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:07.022 01:59:52 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:07.022 01:59:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:07.022 01:59:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:07.022 01:59:52 -- nvmf/common.sh@520 -- # config=() 00:25:07.022 01:59:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:07.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:07.022 01:59:52 -- nvmf/common.sh@520 -- # local subsystem config 00:25:07.022 01:59:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:07.022 01:59:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.022 01:59:52 -- common/autotest_common.sh@10 -- # set +x 00:25:07.022 01:59:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.022 { 00:25:07.022 "params": { 00:25:07.022 "name": "Nvme$subsystem", 00:25:07.022 "trtype": "$TEST_TRANSPORT", 00:25:07.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.022 "adrfam": "ipv4", 00:25:07.022 "trsvcid": "$NVMF_PORT", 00:25:07.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.022 "hdgst": ${hdgst:-false}, 00:25:07.022 "ddgst": ${ddgst:-false} 00:25:07.022 }, 00:25:07.022 "method": "bdev_nvme_attach_controller" 00:25:07.022 } 00:25:07.022 EOF 00:25:07.022 )") 00:25:07.022 01:59:52 -- nvmf/common.sh@542 -- # cat 00:25:07.022 01:59:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.022 01:59:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.022 { 00:25:07.022 "params": { 00:25:07.022 "name": "Nvme$subsystem", 00:25:07.022 "trtype": "$TEST_TRANSPORT", 00:25:07.022 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.022 "adrfam": "ipv4", 00:25:07.022 "trsvcid": "$NVMF_PORT", 00:25:07.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.022 "hdgst": ${hdgst:-false}, 00:25:07.022 "ddgst": ${ddgst:-false} 00:25:07.022 }, 00:25:07.022 "method": "bdev_nvme_attach_controller" 00:25:07.022 } 00:25:07.022 EOF 00:25:07.022 )") 00:25:07.022 01:59:52 -- nvmf/common.sh@542 -- # cat 00:25:07.022 01:59:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.022 01:59:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.022 { 00:25:07.022 "params": { 00:25:07.022 "name": "Nvme$subsystem", 00:25:07.022 "trtype": "$TEST_TRANSPORT", 00:25:07.022 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:07.022 "adrfam": "ipv4", 00:25:07.022 "trsvcid": "$NVMF_PORT", 00:25:07.022 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.022 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.022 "hdgst": ${hdgst:-false}, 00:25:07.023 "ddgst": ${ddgst:-false} 00:25:07.023 }, 00:25:07.023 "method": "bdev_nvme_attach_controller" 00:25:07.023 } 00:25:07.023 EOF 00:25:07.023 )") 00:25:07.023 01:59:52 -- nvmf/common.sh@542 -- # cat 00:25:07.023 01:59:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.023 01:59:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.023 { 00:25:07.023 "params": { 00:25:07.023 "name": "Nvme$subsystem", 00:25:07.023 "trtype": "$TEST_TRANSPORT", 00:25:07.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.023 "adrfam": "ipv4", 00:25:07.023 "trsvcid": "$NVMF_PORT", 00:25:07.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.023 "hdgst": ${hdgst:-false}, 00:25:07.023 "ddgst": ${ddgst:-false} 00:25:07.023 }, 00:25:07.023 "method": "bdev_nvme_attach_controller" 00:25:07.023 } 00:25:07.023 EOF 00:25:07.023 )") 00:25:07.023 01:59:52 -- nvmf/common.sh@542 -- # cat 00:25:07.023 01:59:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.023 01:59:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.023 { 00:25:07.023 "params": { 00:25:07.023 "name": "Nvme$subsystem", 00:25:07.023 "trtype": "$TEST_TRANSPORT", 00:25:07.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.023 "adrfam": "ipv4", 00:25:07.023 "trsvcid": "$NVMF_PORT", 00:25:07.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.023 "hdgst": ${hdgst:-false}, 00:25:07.023 "ddgst": ${ddgst:-false} 00:25:07.023 }, 00:25:07.023 "method": "bdev_nvme_attach_controller" 00:25:07.023 } 00:25:07.023 EOF 00:25:07.023 )") 00:25:07.023 01:59:52 -- nvmf/common.sh@542 -- # cat 00:25:07.023 01:59:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.023 01:59:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.023 { 00:25:07.023 "params": { 00:25:07.023 "name": "Nvme$subsystem", 00:25:07.023 "trtype": "$TEST_TRANSPORT", 00:25:07.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.023 "adrfam": "ipv4", 00:25:07.023 "trsvcid": "$NVMF_PORT", 00:25:07.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.023 "hdgst": ${hdgst:-false}, 00:25:07.023 "ddgst": ${ddgst:-false} 00:25:07.023 }, 00:25:07.023 "method": "bdev_nvme_attach_controller" 00:25:07.023 } 00:25:07.023 EOF 00:25:07.023 )") 00:25:07.023 01:59:52 -- nvmf/common.sh@542 -- # cat 00:25:07.023 01:59:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.023 01:59:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.023 { 00:25:07.023 "params": { 00:25:07.023 "name": "Nvme$subsystem", 00:25:07.023 "trtype": "$TEST_TRANSPORT", 00:25:07.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.023 "adrfam": "ipv4", 00:25:07.023 "trsvcid": "$NVMF_PORT", 00:25:07.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.023 "hdgst": ${hdgst:-false}, 00:25:07.023 "ddgst": ${ddgst:-false} 00:25:07.023 }, 00:25:07.023 "method": "bdev_nvme_attach_controller" 00:25:07.023 } 00:25:07.023 EOF 00:25:07.023 )") 00:25:07.023 01:59:52 -- nvmf/common.sh@542 -- # cat 00:25:07.023 01:59:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:25:07.023 01:59:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.023 { 00:25:07.023 "params": { 00:25:07.023 "name": "Nvme$subsystem", 00:25:07.023 "trtype": "$TEST_TRANSPORT", 00:25:07.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.023 "adrfam": "ipv4", 00:25:07.023 "trsvcid": "$NVMF_PORT", 00:25:07.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.023 "hdgst": ${hdgst:-false}, 00:25:07.023 "ddgst": ${ddgst:-false} 00:25:07.023 }, 00:25:07.023 "method": "bdev_nvme_attach_controller" 00:25:07.023 } 00:25:07.023 EOF 00:25:07.023 )") 00:25:07.023 01:59:52 -- nvmf/common.sh@542 -- # cat 00:25:07.023 01:59:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.023 01:59:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.023 { 00:25:07.023 "params": { 00:25:07.023 "name": "Nvme$subsystem", 00:25:07.023 "trtype": "$TEST_TRANSPORT", 00:25:07.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.023 "adrfam": "ipv4", 00:25:07.023 "trsvcid": "$NVMF_PORT", 00:25:07.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.023 "hdgst": ${hdgst:-false}, 00:25:07.023 "ddgst": ${ddgst:-false} 00:25:07.023 }, 00:25:07.023 "method": "bdev_nvme_attach_controller" 00:25:07.023 } 00:25:07.023 EOF 00:25:07.023 )") 00:25:07.023 01:59:52 -- nvmf/common.sh@542 -- # cat 00:25:07.023 01:59:52 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:07.023 01:59:52 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:07.023 { 00:25:07.023 "params": { 00:25:07.023 "name": "Nvme$subsystem", 00:25:07.023 "trtype": "$TEST_TRANSPORT", 00:25:07.023 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:07.023 "adrfam": "ipv4", 00:25:07.023 "trsvcid": "$NVMF_PORT", 00:25:07.023 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:07.023 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:07.023 "hdgst": ${hdgst:-false}, 00:25:07.023 "ddgst": ${ddgst:-false} 00:25:07.023 }, 00:25:07.023 "method": "bdev_nvme_attach_controller" 00:25:07.023 } 00:25:07.023 EOF 00:25:07.023 )") 00:25:07.023 01:59:52 -- nvmf/common.sh@542 -- # cat 00:25:07.023 01:59:52 -- nvmf/common.sh@544 -- # jq . 
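Note: the xtrace above is nvmf/common.sh's gen_nvmf_target_json helper, invoked earlier as gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10. It renders one bdev_nvme_attach_controller stanza per subsystem from a heredoc template into a Bash array, joins the array with IFS=, and pretty-prints through jq; bdevperf consumes the result via --json /dev/fd/63 (process substitution), as seen in the bdevperf command line above. A minimal sketch of that pattern, reassembled from this trace; the [%s] wrapper and the exact plumbing into jq are assumptions for a self-contained example, since the real helper embeds the stanza list in a larger target-config document that is not fully visible here:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # one attach-controller stanza per subsystem; TEST_TRANSPORT,
        # NVMF_FIRST_TARGET_IP and NVMF_PORT must be set by the caller
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # join the stanzas with commas; the brackets make the result valid JSON
    # so jq can parse and pretty-print it as an array
    local IFS=,
    printf '[%s]\n' "${config[*]}" | jq .
}

The printf output that follows is the joined stanza list for Nvme1 through Nvme10, which bdevperf then replays as bdev_nvme_attach_controller RPCs at startup.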
00:25:07.023 01:59:52 -- nvmf/common.sh@545 -- # IFS=, 00:25:07.023 01:59:52 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:07.023 "params": { 00:25:07.023 "name": "Nvme1", 00:25:07.023 "trtype": "tcp", 00:25:07.023 "traddr": "10.0.0.2", 00:25:07.023 "adrfam": "ipv4", 00:25:07.023 "trsvcid": "4420", 00:25:07.023 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:07.023 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:07.023 "hdgst": false, 00:25:07.023 "ddgst": false 00:25:07.023 }, 00:25:07.023 "method": "bdev_nvme_attach_controller" 00:25:07.023 },{ 00:25:07.023 "params": { 00:25:07.023 "name": "Nvme2", 00:25:07.023 "trtype": "tcp", 00:25:07.023 "traddr": "10.0.0.2", 00:25:07.023 "adrfam": "ipv4", 00:25:07.023 "trsvcid": "4420", 00:25:07.023 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:07.023 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:07.023 "hdgst": false, 00:25:07.023 "ddgst": false 00:25:07.023 }, 00:25:07.023 "method": "bdev_nvme_attach_controller" 00:25:07.023 },{ 00:25:07.023 "params": { 00:25:07.023 "name": "Nvme3", 00:25:07.023 "trtype": "tcp", 00:25:07.023 "traddr": "10.0.0.2", 00:25:07.023 "adrfam": "ipv4", 00:25:07.023 "trsvcid": "4420", 00:25:07.023 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:07.023 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:07.023 "hdgst": false, 00:25:07.023 "ddgst": false 00:25:07.023 }, 00:25:07.023 "method": "bdev_nvme_attach_controller" 00:25:07.023 },{ 00:25:07.023 "params": { 00:25:07.023 "name": "Nvme4", 00:25:07.023 "trtype": "tcp", 00:25:07.023 "traddr": "10.0.0.2", 00:25:07.023 "adrfam": "ipv4", 00:25:07.023 "trsvcid": "4420", 00:25:07.023 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:07.023 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:07.023 "hdgst": false, 00:25:07.023 "ddgst": false 00:25:07.023 }, 00:25:07.023 "method": "bdev_nvme_attach_controller" 00:25:07.023 },{ 00:25:07.023 "params": { 00:25:07.023 "name": "Nvme5", 00:25:07.023 "trtype": "tcp", 00:25:07.023 "traddr": "10.0.0.2", 00:25:07.023 "adrfam": "ipv4", 00:25:07.023 "trsvcid": "4420", 00:25:07.023 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:07.023 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:07.023 "hdgst": false, 00:25:07.023 "ddgst": false 00:25:07.023 }, 00:25:07.023 "method": "bdev_nvme_attach_controller" 00:25:07.023 },{ 00:25:07.023 "params": { 00:25:07.023 "name": "Nvme6", 00:25:07.023 "trtype": "tcp", 00:25:07.023 "traddr": "10.0.0.2", 00:25:07.023 "adrfam": "ipv4", 00:25:07.023 "trsvcid": "4420", 00:25:07.023 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:07.023 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:07.023 "hdgst": false, 00:25:07.023 "ddgst": false 00:25:07.023 }, 00:25:07.023 "method": "bdev_nvme_attach_controller" 00:25:07.023 },{ 00:25:07.023 "params": { 00:25:07.023 "name": "Nvme7", 00:25:07.023 "trtype": "tcp", 00:25:07.023 "traddr": "10.0.0.2", 00:25:07.023 "adrfam": "ipv4", 00:25:07.023 "trsvcid": "4420", 00:25:07.023 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:07.023 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:07.023 "hdgst": false, 00:25:07.023 "ddgst": false 00:25:07.023 }, 00:25:07.023 "method": "bdev_nvme_attach_controller" 00:25:07.023 },{ 00:25:07.023 "params": { 00:25:07.023 "name": "Nvme8", 00:25:07.023 "trtype": "tcp", 00:25:07.023 "traddr": "10.0.0.2", 00:25:07.023 "adrfam": "ipv4", 00:25:07.023 "trsvcid": "4420", 00:25:07.023 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:07.023 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:07.023 "hdgst": false, 00:25:07.023 "ddgst": false 00:25:07.023 }, 00:25:07.023 "method": 
"bdev_nvme_attach_controller" 00:25:07.024 },{ 00:25:07.024 "params": { 00:25:07.024 "name": "Nvme9", 00:25:07.024 "trtype": "tcp", 00:25:07.024 "traddr": "10.0.0.2", 00:25:07.024 "adrfam": "ipv4", 00:25:07.024 "trsvcid": "4420", 00:25:07.024 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:07.024 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:07.024 "hdgst": false, 00:25:07.024 "ddgst": false 00:25:07.024 }, 00:25:07.024 "method": "bdev_nvme_attach_controller" 00:25:07.024 },{ 00:25:07.024 "params": { 00:25:07.024 "name": "Nvme10", 00:25:07.024 "trtype": "tcp", 00:25:07.024 "traddr": "10.0.0.2", 00:25:07.024 "adrfam": "ipv4", 00:25:07.024 "trsvcid": "4420", 00:25:07.024 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:07.024 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:07.024 "hdgst": false, 00:25:07.024 "ddgst": false 00:25:07.024 }, 00:25:07.024 "method": "bdev_nvme_attach_controller" 00:25:07.024 }' 00:25:07.024 [2024-04-15 01:59:52.506838] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:25:07.024 [2024-04-15 01:59:52.506913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2232960 ] 00:25:07.024 EAL: No free 2048 kB hugepages reported on node 1 00:25:07.024 [2024-04-15 01:59:52.571230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:07.024 [2024-04-15 01:59:52.657252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:08.394 Running I/O for 10 seconds... 00:25:08.652 01:59:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:08.652 01:59:54 -- common/autotest_common.sh@852 -- # return 0 00:25:08.652 01:59:54 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:08.652 01:59:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:08.652 01:59:54 -- common/autotest_common.sh@10 -- # set +x 00:25:08.652 01:59:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:08.652 01:59:54 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:08.652 01:59:54 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:08.652 01:59:54 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:08.652 01:59:54 -- target/shutdown.sh@57 -- # local ret=1 00:25:08.652 01:59:54 -- target/shutdown.sh@58 -- # local i 00:25:08.652 01:59:54 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:08.652 01:59:54 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:08.652 01:59:54 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:08.652 01:59:54 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:08.652 01:59:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:08.652 01:59:54 -- common/autotest_common.sh@10 -- # set +x 00:25:08.652 01:59:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:08.914 01:59:54 -- target/shutdown.sh@60 -- # read_io_count=87 00:25:08.914 01:59:54 -- target/shutdown.sh@63 -- # '[' 87 -ge 100 ']' 00:25:08.914 01:59:54 -- target/shutdown.sh@67 -- # sleep 0.25 00:25:09.171 01:59:54 -- target/shutdown.sh@59 -- # (( i-- )) 00:25:09.171 01:59:54 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:09.171 01:59:54 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:09.171 01:59:54 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:09.171 01:59:54 -- 
common/autotest_common.sh@551 -- # xtrace_disable
00:25:09.171 01:59:54 -- common/autotest_common.sh@10 -- # set +x
00:25:09.171 01:59:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:25:09.171 01:59:54 -- target/shutdown.sh@60 -- # read_io_count=167
00:25:09.171 01:59:54 -- target/shutdown.sh@63 -- # '[' 167 -ge 100 ']'
00:25:09.171 01:59:54 -- target/shutdown.sh@64 -- # ret=0
00:25:09.171 01:59:54 -- target/shutdown.sh@65 -- # break
00:25:09.171 01:59:54 -- target/shutdown.sh@69 -- # return 0
00:25:09.171 01:59:54 -- target/shutdown.sh@109 -- # killprocess 2232960
00:25:09.171 01:59:54 -- common/autotest_common.sh@926 -- # '[' -z 2232960 ']'
00:25:09.171 01:59:54 -- common/autotest_common.sh@930 -- # kill -0 2232960
00:25:09.171 01:59:54 -- common/autotest_common.sh@931 -- # uname
00:25:09.171 01:59:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:25:09.171 01:59:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2232960
00:25:09.172 01:59:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:25:09.172 01:59:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:25:09.172 01:59:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2232960'
00:25:09.172 killing process with pid 2232960
00:25:09.172 01:59:54 -- common/autotest_common.sh@945 -- # kill 2232960
00:25:09.172 01:59:54 -- common/autotest_common.sh@950 -- # wait 2232960
00:25:09.172 Received shutdown signal, test time was about 0.713544 seconds
00:25:09.172
00:25:09.172 Latency(us)
00:25:09.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:09.172 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.172 Verification LBA range: start 0x0 length 0x400
00:25:09.172 Nvme1n1 : 0.69 395.45 24.72 0.00 0.00 156958.18 23398.78 137479.96
00:25:09.172 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.172 Verification LBA range: start 0x0 length 0x400
00:25:09.172 Nvme2n1 : 0.69 393.65 24.60 0.00 0.00 155966.58 22816.24 125829.12
00:25:09.172 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.172 Verification LBA range: start 0x0 length 0x400
00:25:09.172 Nvme3n1 : 0.68 335.88 20.99 0.00 0.00 180383.69 22913.33 173985.94
00:25:09.172 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.172 Verification LBA range: start 0x0 length 0x400
00:25:09.172 Nvme4n1 : 0.68 398.21 24.89 0.00 0.00 150357.47 23690.05 133596.35
00:25:09.172 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.172 Verification LBA range: start 0x0 length 0x400
00:25:09.172 Nvme5n1 : 0.68 397.38 24.84 0.00 0.00 148437.97 27185.30 123498.95
00:25:09.172 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.172 Verification LBA range: start 0x0 length 0x400
00:25:09.172 Nvme6n1 : 0.69 329.44 20.59 0.00 0.00 167388.70 23010.42 136703.24
00:25:09.172 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.172 Verification LBA range: start 0x0 length 0x400
00:25:09.172 Nvme7n1 : 0.69 391.78 24.49 0.00 0.00 148184.36 22039.51 124275.67
00:25:09.172 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.172 Verification LBA range: start 0x0 length 0x400
00:25:09.172 Nvme8n1 : 0.71 319.90 19.99 0.00 0.00 168546.33 26214.40 131266.18
00:25:09.172 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.172 Verification LBA range: start 0x0 length 0x400
00:25:09.172 Nvme9n1 : 0.70 387.88 24.24 0.00 0.00 145841.06 20097.71 125829.12
00:25:09.172 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:25:09.172 Verification LBA range: start 0x0 length 0x400
00:25:09.172 Nvme10n1 : 0.66 344.37 21.52 0.00 0.00 160752.06 23204.60 145247.19
00:25:09.172 ===================================================================================================================
00:25:09.172 Total : 3693.94 230.87 0.00 0.00 157530.79 20097.71 173985.94
00:25:09.430 01:59:54 -- target/shutdown.sh@112 -- # sleep 1
00:25:10.363 01:59:55 -- target/shutdown.sh@113 -- # kill -0 2232761
00:25:10.363 01:59:55 -- target/shutdown.sh@115 -- # stoptarget
00:25:10.363 01:59:55 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:25:10.363 01:59:55 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:25:10.363 01:59:55 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:25:10.363 01:59:55 -- target/shutdown.sh@45 -- # nvmftestfini
00:25:10.363 01:59:55 -- nvmf/common.sh@476 -- # nvmfcleanup
00:25:10.363 01:59:55 -- nvmf/common.sh@116 -- # sync
00:25:10.363 01:59:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:25:10.363 01:59:55 -- nvmf/common.sh@119 -- # set +e
00:25:10.363 01:59:55 -- nvmf/common.sh@120 -- # for i in {1..20}
00:25:10.363 01:59:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:25:10.363 rmmod nvme_tcp
00:25:10.621 rmmod nvme_fabrics
00:25:10.621 rmmod nvme_keyring
00:25:10.621 01:59:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:25:10.621 01:59:56 -- nvmf/common.sh@123 -- # set -e
00:25:10.621 01:59:56 -- nvmf/common.sh@124 -- # return 0
00:25:10.621 01:59:56 -- nvmf/common.sh@477 -- # '[' -n 2232761 ']'
00:25:10.621 01:59:56 -- nvmf/common.sh@478 -- # killprocess 2232761
00:25:10.621 01:59:56 -- common/autotest_common.sh@926 -- # '[' -z 2232761 ']'
00:25:10.621 01:59:56 -- common/autotest_common.sh@930 -- # kill -0 2232761
00:25:10.621 01:59:56 -- common/autotest_common.sh@931 -- # uname
00:25:10.621 01:59:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:25:10.621 01:59:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2232761
00:25:10.621 01:59:56 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:25:10.621 01:59:56 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:25:10.621 01:59:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2232761'
00:25:10.621 killing process with pid 2232761
00:25:10.621 01:59:56 -- common/autotest_common.sh@945 -- # kill 2232761
00:25:10.621 01:59:56 -- common/autotest_common.sh@950 -- # wait 2232761
00:25:11.187 01:59:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:25:11.187 01:59:56 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:25:11.187 01:59:56 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:25:11.188 01:59:56 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:11.188 01:59:56 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:25:11.188 01:59:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:11.188 01:59:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:11.188 01:59:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:13.091 01:59:58 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:25:13.091
00:25:13.091
real 0m7.912s 00:25:13.091 user 0m24.288s 00:25:13.091 sys 0m1.464s 00:25:13.091 01:59:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:13.091 01:59:58 -- common/autotest_common.sh@10 -- # set +x 00:25:13.091 ************************************ 00:25:13.091 END TEST nvmf_shutdown_tc2 00:25:13.091 ************************************ 00:25:13.091 01:59:58 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:25:13.091 01:59:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:13.091 01:59:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:13.091 01:59:58 -- common/autotest_common.sh@10 -- # set +x 00:25:13.091 ************************************ 00:25:13.091 START TEST nvmf_shutdown_tc3 00:25:13.091 ************************************ 00:25:13.091 01:59:58 -- common/autotest_common.sh@1104 -- # nvmf_shutdown_tc3 00:25:13.091 01:59:58 -- target/shutdown.sh@120 -- # starttarget 00:25:13.091 01:59:58 -- target/shutdown.sh@15 -- # nvmftestinit 00:25:13.091 01:59:58 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:13.091 01:59:58 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:13.091 01:59:58 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:13.091 01:59:58 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:13.091 01:59:58 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:13.092 01:59:58 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:13.092 01:59:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:13.092 01:59:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:13.092 01:59:58 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:13.092 01:59:58 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:13.092 01:59:58 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:13.092 01:59:58 -- common/autotest_common.sh@10 -- # set +x 00:25:13.092 01:59:58 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:13.092 01:59:58 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:13.092 01:59:58 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:13.092 01:59:58 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:13.092 01:59:58 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:13.092 01:59:58 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:13.092 01:59:58 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:13.092 01:59:58 -- nvmf/common.sh@294 -- # net_devs=() 00:25:13.092 01:59:58 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:13.092 01:59:58 -- nvmf/common.sh@295 -- # e810=() 00:25:13.092 01:59:58 -- nvmf/common.sh@295 -- # local -ga e810 00:25:13.092 01:59:58 -- nvmf/common.sh@296 -- # x722=() 00:25:13.092 01:59:58 -- nvmf/common.sh@296 -- # local -ga x722 00:25:13.092 01:59:58 -- nvmf/common.sh@297 -- # mlx=() 00:25:13.092 01:59:58 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:13.092 01:59:58 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:13.092 01:59:58 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:13.092 01:59:58 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:13.092 01:59:58 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:13.092 01:59:58 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:13.092 01:59:58 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:13.092 01:59:58 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:13.092 01:59:58 -- 
nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:13.092 01:59:58 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:13.092 01:59:58 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:13.092 01:59:58 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:13.092 01:59:58 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:13.092 01:59:58 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:13.092 01:59:58 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:13.092 01:59:58 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:13.092 01:59:58 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:13.092 01:59:58 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:13.092 01:59:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:13.092 01:59:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:13.092 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:13.092 01:59:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:13.092 01:59:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:13.092 01:59:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.092 01:59:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.092 01:59:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:13.092 01:59:58 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:13.092 01:59:58 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:13.092 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:13.092 01:59:58 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:13.092 01:59:58 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:13.092 01:59:58 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:13.092 01:59:58 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:13.092 01:59:58 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:13.092 01:59:58 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:13.092 01:59:58 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:13.092 01:59:58 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:13.092 01:59:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:13.092 01:59:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.092 01:59:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:13.092 01:59:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.092 01:59:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:13.092 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:13.092 01:59:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.092 01:59:58 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:13.092 01:59:58 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:13.092 01:59:58 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:13.092 01:59:58 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:13.092 01:59:58 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:13.092 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:13.092 01:59:58 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:13.092 01:59:58 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:13.092 01:59:58 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:13.092 01:59:58 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:13.092 01:59:58 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:13.092 01:59:58 -- 
nvmf/common.sh@406 -- # nvmf_tcp_init
00:25:13.092 01:59:58 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1
00:25:13.092 01:59:58 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:13.092 01:59:58 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:13.092 01:59:58 -- nvmf/common.sh@233 -- # (( 2 > 1 ))
00:25:13.092 01:59:58 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:13.092 01:59:58 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:13.092 01:59:58 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP=
00:25:13.092 01:59:58 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:13.092 01:59:58 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:13.092 01:59:58 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0
00:25:13.092 01:59:58 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1
00:25:13.092 01:59:58 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk
00:25:13.092 01:59:58 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:13.351 01:59:58 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:13.351 01:59:58 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:13.351 01:59:58 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up
00:25:13.351 01:59:58 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:13.351 01:59:58 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:13.351 01:59:58 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:13.351 01:59:58 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2
00:25:13.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:13.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.236 ms
00:25:13.351
00:25:13.351 --- 10.0.0.2 ping statistics ---
00:25:13.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:13.351 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms
00:25:13.351 01:59:58 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:13.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:13.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms
00:25:13.351
00:25:13.351 --- 10.0.0.1 ping statistics ---
00:25:13.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:13.351 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms
00:25:13.351 01:59:58 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:13.351 01:59:58 -- nvmf/common.sh@410 -- # return 0
00:25:13.351 01:59:58 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:25:13.351 01:59:58 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:13.351 01:59:58 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:25:13.351 01:59:58 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:25:13.351 01:59:58 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:13.351 01:59:58 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:25:13.351 01:59:58 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:25:13.351 01:59:58 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:25:13.351 01:59:58 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:25:13.351 01:59:58 -- common/autotest_common.sh@712 -- # xtrace_disable
00:25:13.351 01:59:58 -- common/autotest_common.sh@10 -- # set +x
00:25:13.351 01:59:58 -- nvmf/common.sh@469 -- # nvmfpid=2233889
00:25:13.351 01:59:58 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:25:13.351 01:59:58 -- nvmf/common.sh@470 -- # waitforlisten 2233889
00:25:13.351 01:59:58 -- common/autotest_common.sh@819 -- # '[' -z 2233889 ']'
00:25:13.351 01:59:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:13.351 01:59:58 -- common/autotest_common.sh@824 -- # local max_retries=100
00:25:13.351 01:59:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:13.351 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:13.351 01:59:58 -- common/autotest_common.sh@828 -- # xtrace_disable
00:25:13.351 01:59:58 -- common/autotest_common.sh@10 -- # set +x
00:25:13.351 [2024-04-15 01:59:58.917138] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
00:25:13.351 [2024-04-15 01:59:58.917229] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:13.351 EAL: No free 2048 kB hugepages reported on node 1
00:25:13.351 [2024-04-15 01:59:58.995827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:13.610 [2024-04-15 01:59:59.086866] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:25:13.610 [2024-04-15 01:59:59.087021] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:13.610 [2024-04-15 01:59:59.087037] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:13.610 [2024-04-15 01:59:59.087058] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:13.610 [2024-04-15 01:59:59.087106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:13.610 [2024-04-15 01:59:59.087167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:13.610 [2024-04-15 01:59:59.087234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:25:13.610 [2024-04-15 01:59:59.087237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.543 01:59:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:14.543 01:59:59 -- common/autotest_common.sh@852 -- # return 0 00:25:14.543 01:59:59 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:14.543 01:59:59 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:14.543 01:59:59 -- common/autotest_common.sh@10 -- # set +x 00:25:14.543 01:59:59 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:14.543 01:59:59 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:14.543 01:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.543 01:59:59 -- common/autotest_common.sh@10 -- # set +x 00:25:14.543 [2024-04-15 01:59:59.861550] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.543 01:59:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.543 01:59:59 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:25:14.543 01:59:59 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:25:14.543 01:59:59 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:14.543 01:59:59 -- common/autotest_common.sh@10 -- # set +x 00:25:14.543 01:59:59 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:14.543 01:59:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:14.543 01:59:59 -- target/shutdown.sh@28 -- # cat 00:25:14.543 01:59:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:14.543 01:59:59 -- target/shutdown.sh@28 -- # cat 00:25:14.543 01:59:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:14.543 01:59:59 -- target/shutdown.sh@28 -- # cat 00:25:14.543 01:59:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:14.543 01:59:59 -- target/shutdown.sh@28 -- # cat 00:25:14.543 01:59:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:14.543 01:59:59 -- target/shutdown.sh@28 -- # cat 00:25:14.543 01:59:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:14.543 01:59:59 -- target/shutdown.sh@28 -- # cat 00:25:14.543 01:59:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:14.543 01:59:59 -- target/shutdown.sh@28 -- # cat 00:25:14.543 01:59:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:14.543 01:59:59 -- target/shutdown.sh@28 -- # cat 00:25:14.543 01:59:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:14.543 01:59:59 -- target/shutdown.sh@28 -- # cat 00:25:14.543 01:59:59 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:25:14.543 01:59:59 -- target/shutdown.sh@28 -- # cat 00:25:14.543 01:59:59 -- target/shutdown.sh@35 -- # rpc_cmd 00:25:14.543 01:59:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:14.543 01:59:59 -- common/autotest_common.sh@10 -- # set +x 00:25:14.543 Malloc1 00:25:14.543 [2024-04-15 01:59:59.950891] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.543 Malloc2 
00:25:14.543 Malloc3 00:25:14.543 Malloc4 00:25:14.543 Malloc5 00:25:14.543 Malloc6 00:25:14.802 Malloc7 00:25:14.802 Malloc8 00:25:14.802 Malloc9 00:25:14.802 Malloc10 00:25:14.802 02:00:00 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:14.802 02:00:00 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:25:14.802 02:00:00 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:14.802 02:00:00 -- common/autotest_common.sh@10 -- # set +x 00:25:14.802 02:00:00 -- target/shutdown.sh@124 -- # perfpid=2234111 00:25:14.802 02:00:00 -- target/shutdown.sh@125 -- # waitforlisten 2234111 /var/tmp/bdevperf.sock 00:25:14.802 02:00:00 -- common/autotest_common.sh@819 -- # '[' -z 2234111 ']' 00:25:14.802 02:00:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:14.802 02:00:00 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:25:14.802 02:00:00 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:25:14.802 02:00:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:14.802 02:00:00 -- nvmf/common.sh@520 -- # config=() 00:25:14.802 02:00:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:14.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:14.802 02:00:00 -- nvmf/common.sh@520 -- # local subsystem config 00:25:14.802 02:00:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:14.802 02:00:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:14.802 02:00:00 -- common/autotest_common.sh@10 -- # set +x 00:25:14.802 02:00:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:14.802 { 00:25:14.802 "params": { 00:25:14.802 "name": "Nvme$subsystem", 00:25:14.802 "trtype": "$TEST_TRANSPORT", 00:25:14.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.802 "adrfam": "ipv4", 00:25:14.802 "trsvcid": "$NVMF_PORT", 00:25:14.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.802 "hdgst": ${hdgst:-false}, 00:25:14.802 "ddgst": ${ddgst:-false} 00:25:14.802 }, 00:25:14.802 "method": "bdev_nvme_attach_controller" 00:25:14.802 } 00:25:14.802 EOF 00:25:14.802 )") 00:25:14.802 02:00:00 -- nvmf/common.sh@542 -- # cat 00:25:14.802 02:00:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:14.802 02:00:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:14.802 { 00:25:14.802 "params": { 00:25:14.802 "name": "Nvme$subsystem", 00:25:14.802 "trtype": "$TEST_TRANSPORT", 00:25:14.802 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:14.802 "adrfam": "ipv4", 00:25:14.802 "trsvcid": "$NVMF_PORT", 00:25:14.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.802 "hdgst": ${hdgst:-false}, 00:25:14.802 "ddgst": ${ddgst:-false} 00:25:14.802 }, 00:25:14.802 "method": "bdev_nvme_attach_controller" 00:25:14.802 } 00:25:14.802 EOF 00:25:14.802 )") 00:25:14.802 02:00:00 -- nvmf/common.sh@542 -- # cat 00:25:14.802 02:00:00 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:14.802 02:00:00 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:14.802 { 00:25:14.802 "params": { 00:25:14.802 "name": "Nvme$subsystem", 00:25:14.802 "trtype": "$TEST_TRANSPORT", 00:25:14.802 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:25:14.802 "adrfam": "ipv4", 00:25:14.802 "trsvcid": "$NVMF_PORT", 00:25:14.802 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:14.802 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:14.802 "hdgst": ${hdgst:-false}, 00:25:14.802 "ddgst": ${ddgst:-false} 00:25:14.802 }, 00:25:14.802 "method": "bdev_nvme_attach_controller" 00:25:14.802 } 00:25:14.802 EOF 00:25:14.802 )") 00:25:14.802 02:00:00 -- nvmf/common.sh@542 -- # cat
[the nvmf/common.sh@522 for-subsystem / @542 config+= heredoc / @542 cat sequence repeats identically for each of the ten subsystems]
00:25:15.062 02:00:00 -- nvmf/common.sh@544 -- # jq .
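The gen_nvmf_target_json trace above (and the IFS=,/printf join that follows below) reduces to this shape: one bdev_nvme_attach_controller entry per subsystem accumulated into config[], comma-joined, and validated with jq. A sketch under those assumptions; the per-entry block is taken from the heredoc in the trace, while the outer "subsystems"/"bdev" wrapper is illustrative only:

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # one attach-controller entry per requested subsystem; hdgst
        # and ddgst default to false exactly as in the trace output
        config+=("$(cat << EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,
    # comma-join the entries and let jq validate/pretty-print the result
    jq . << JSON
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
JSON
}

The generated document is fed to bdevperf through process substitution, which is why the bdevperf command line in the trace shows --json /dev/fd/63:

bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) -q 64 -o 65536 -w verify -t 10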
00:25:15.062 02:00:00 -- nvmf/common.sh@545 -- # IFS=, 00:25:15.062 02:00:00 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:15.062 "params": { 00:25:15.062 "name": "Nvme1", 00:25:15.062 "trtype": "tcp", 00:25:15.062 "traddr": "10.0.0.2", 00:25:15.062 "adrfam": "ipv4", 00:25:15.062 "trsvcid": "4420", 00:25:15.062 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.062 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:15.062 "hdgst": false, 00:25:15.062 "ddgst": false 00:25:15.062 }, 00:25:15.062 "method": "bdev_nvme_attach_controller" 00:25:15.062 },{ 00:25:15.062 "params": { 00:25:15.062 "name": "Nvme2", 00:25:15.062 "trtype": "tcp", 00:25:15.062 "traddr": "10.0.0.2", 00:25:15.062 "adrfam": "ipv4", 00:25:15.062 "trsvcid": "4420", 00:25:15.062 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:15.062 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:15.062 "hdgst": false, 00:25:15.062 "ddgst": false 00:25:15.062 }, 00:25:15.062 "method": "bdev_nvme_attach_controller" 00:25:15.062 },{ 00:25:15.062 "params": { 00:25:15.062 "name": "Nvme3", 00:25:15.062 "trtype": "tcp", 00:25:15.062 "traddr": "10.0.0.2", 00:25:15.062 "adrfam": "ipv4", 00:25:15.062 "trsvcid": "4420", 00:25:15.062 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:25:15.062 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:25:15.062 "hdgst": false, 00:25:15.062 "ddgst": false 00:25:15.062 }, 00:25:15.062 "method": "bdev_nvme_attach_controller" 00:25:15.062 },{ 00:25:15.062 "params": { 00:25:15.062 "name": "Nvme4", 00:25:15.062 "trtype": "tcp", 00:25:15.062 "traddr": "10.0.0.2", 00:25:15.062 "adrfam": "ipv4", 00:25:15.062 "trsvcid": "4420", 00:25:15.062 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:25:15.062 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:25:15.062 "hdgst": false, 00:25:15.062 "ddgst": false 00:25:15.062 }, 00:25:15.062 "method": "bdev_nvme_attach_controller" 00:25:15.062 },{ 00:25:15.062 "params": { 00:25:15.062 "name": "Nvme5", 00:25:15.062 "trtype": "tcp", 00:25:15.062 "traddr": "10.0.0.2", 00:25:15.062 "adrfam": "ipv4", 00:25:15.062 "trsvcid": "4420", 00:25:15.062 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:25:15.062 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:25:15.062 "hdgst": false, 00:25:15.062 "ddgst": false 00:25:15.062 }, 00:25:15.062 "method": "bdev_nvme_attach_controller" 00:25:15.062 },{ 00:25:15.062 "params": { 00:25:15.062 "name": "Nvme6", 00:25:15.062 "trtype": "tcp", 00:25:15.062 "traddr": "10.0.0.2", 00:25:15.062 "adrfam": "ipv4", 00:25:15.062 "trsvcid": "4420", 00:25:15.062 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:25:15.062 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:25:15.062 "hdgst": false, 00:25:15.062 "ddgst": false 00:25:15.062 }, 00:25:15.062 "method": "bdev_nvme_attach_controller" 00:25:15.062 },{ 00:25:15.062 "params": { 00:25:15.062 "name": "Nvme7", 00:25:15.062 "trtype": "tcp", 00:25:15.062 "traddr": "10.0.0.2", 00:25:15.062 "adrfam": "ipv4", 00:25:15.062 "trsvcid": "4420", 00:25:15.062 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:25:15.062 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:25:15.062 "hdgst": false, 00:25:15.062 "ddgst": false 00:25:15.062 }, 00:25:15.062 "method": "bdev_nvme_attach_controller" 00:25:15.062 },{ 00:25:15.062 "params": { 00:25:15.062 "name": "Nvme8", 00:25:15.062 "trtype": "tcp", 00:25:15.062 "traddr": "10.0.0.2", 00:25:15.062 "adrfam": "ipv4", 00:25:15.062 "trsvcid": "4420", 00:25:15.062 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:25:15.062 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:25:15.062 "hdgst": false, 00:25:15.062 "ddgst": false 00:25:15.062 }, 00:25:15.062 "method": 
"bdev_nvme_attach_controller" 00:25:15.062 },{ 00:25:15.062 "params": { 00:25:15.062 "name": "Nvme9", 00:25:15.062 "trtype": "tcp", 00:25:15.062 "traddr": "10.0.0.2", 00:25:15.062 "adrfam": "ipv4", 00:25:15.062 "trsvcid": "4420", 00:25:15.062 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:25:15.062 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:25:15.062 "hdgst": false, 00:25:15.062 "ddgst": false 00:25:15.062 }, 00:25:15.062 "method": "bdev_nvme_attach_controller" 00:25:15.062 },{ 00:25:15.062 "params": { 00:25:15.062 "name": "Nvme10", 00:25:15.062 "trtype": "tcp", 00:25:15.062 "traddr": "10.0.0.2", 00:25:15.062 "adrfam": "ipv4", 00:25:15.062 "trsvcid": "4420", 00:25:15.062 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:25:15.062 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:25:15.062 "hdgst": false, 00:25:15.062 "ddgst": false 00:25:15.062 }, 00:25:15.062 "method": "bdev_nvme_attach_controller" 00:25:15.062 }' 00:25:15.062 [2024-04-15 02:00:00.459234] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:25:15.062 [2024-04-15 02:00:00.459313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2234111 ] 00:25:15.062 EAL: No free 2048 kB hugepages reported on node 1 00:25:15.062 [2024-04-15 02:00:00.528220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:15.062 [2024-04-15 02:00:00.615526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.014 Running I/O for 10 seconds... 00:25:17.272 02:00:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:17.272 02:00:02 -- common/autotest_common.sh@852 -- # return 0 00:25:17.272 02:00:02 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:25:17.272 02:00:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.272 02:00:02 -- common/autotest_common.sh@10 -- # set +x 00:25:17.272 02:00:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.272 02:00:02 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:17.272 02:00:02 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:25:17.544 02:00:02 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:25:17.544 02:00:02 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:25:17.544 02:00:02 -- target/shutdown.sh@57 -- # local ret=1 00:25:17.544 02:00:02 -- target/shutdown.sh@58 -- # local i 00:25:17.544 02:00:02 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:25:17.544 02:00:02 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:25:17.544 02:00:02 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:25:17.544 02:00:02 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:25:17.544 02:00:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.544 02:00:02 -- common/autotest_common.sh@10 -- # set +x 00:25:17.544 02:00:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.544 02:00:02 -- target/shutdown.sh@60 -- # read_io_count=168 00:25:17.544 02:00:02 -- target/shutdown.sh@63 -- # '[' 168 -ge 100 ']' 00:25:17.544 02:00:02 -- target/shutdown.sh@64 -- # ret=0 00:25:17.544 02:00:02 -- target/shutdown.sh@65 -- # break 00:25:17.544 02:00:02 -- target/shutdown.sh@69 -- # return 0 00:25:17.544 02:00:02 -- target/shutdown.sh@134 -- # killprocess 
2233889 00:25:17.544 02:00:02 -- common/autotest_common.sh@926 -- # '[' -z 2233889 ']' 00:25:17.545 02:00:02 -- common/autotest_common.sh@930 -- # kill -0 2233889 00:25:17.545 02:00:02 -- common/autotest_common.sh@931 -- # uname 00:25:17.545 02:00:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:17.545 02:00:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2233889 00:25:17.545 02:00:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:17.545 02:00:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:17.545 02:00:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2233889' 00:25:17.545 killing process with pid 2233889 00:25:17.545 02:00:02 -- common/autotest_common.sh@945 -- # kill 2233889 00:25:17.545 02:00:02 -- common/autotest_common.sh@950 -- # wait 2233889 00:25:17.545 [2024-04-15 02:00:02.985367] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985454] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985470] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985484] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985496] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985509] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985521] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985559] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985571] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985584] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985596] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985608] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985620] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985633] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985645] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985658] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985670] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985682] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985694] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985706] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985718] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985745] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985759] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985771] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985795] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985806] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985818] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985855] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985879] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985904] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985916] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the 
state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985928] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985940] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.985990] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.986002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.986014] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.986026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.986038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.986068] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.986083] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.986106] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.986119] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.986131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.986143] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.986155] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.986167] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.986179] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.545 [2024-04-15 02:00:02.986191] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.986203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x225ce20 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988586] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988601] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988614] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988664] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988676] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988689] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988713] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988736] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988760] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988771] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988810] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988823] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988836] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988848] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 
02:00:02.988860] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988872] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988884] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988920] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988943] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988979] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.988992] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989016] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989028] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989041] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989211] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989253] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same 
with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989318] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989349] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989411] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989427] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989438] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989450] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989463] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989475] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989487] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989499] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989524] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989536] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.989547] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20125a0 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.991822] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.991852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.991867] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.991881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.991895] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.991908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.991922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.991935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.991948] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.991967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.991980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.991995] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.992008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.992022] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.992036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.992057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.546 [2024-04-15 02:00:02.992077] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992110] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992133] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992154] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992198] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992219] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992241] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the 
state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992270] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992283] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992296] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992322] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992334] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992362] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992374] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992387] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992400] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992418] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992443] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992468] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992493] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992505] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992518] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992545] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992558] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992570] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.992582] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010080 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994311] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994355] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994383] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994396] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994428] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994452] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994476] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994531] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994557] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 
02:00:02.994569] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994581] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994593] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994605] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994617] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994630] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994654] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994667] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994679] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994691] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994703] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994715] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994727] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994740] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994764] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994802] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994827] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same 
with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994840] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994868] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994894] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994906] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.547 [2024-04-15 02:00:02.994919] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.994931] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.994943] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.994955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.994967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.994979] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.994991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.995003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.995015] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.995027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.995039] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.995061] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.995074] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.995086] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.995101] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.995113] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.995125] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.995137] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20109c0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.995923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2010e70 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.996763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.548 [2024-04-15 02:00:02.996807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.548 [2024-04-15 02:00:02.996826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.548 [2024-04-15 02:00:02.996840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.548 [2024-04-15 02:00:02.996862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.548 [2024-04-15 02:00:02.996879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.548 [2024-04-15 02:00:02.996894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.548 [2024-04-15 02:00:02.996908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.548 [2024-04-15 02:00:02.996922] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18970a0 is same with the state(5) to be set 00:25:17.548 [2024-04-15 02:00:02.996978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.548 [2024-04-15 02:00:02.997000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.548 [2024-04-15 02:00:02.997015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.548 [2024-04-15 02:00:02.997029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.548 [2024-04-15 02:00:02.997043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.548 [2024-04-15 02:00:02.997067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.548 [2024-04-15 02:00:02.997082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.548 [2024-04-15 02:00:02.997098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0
00:25:17.548 [2024-04-15 02:00:02.997111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186dba0 is same with the state(5) to be set
00:25:17.548 [2024-04-15 02:00:02.997156] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.548 [2024-04-15 02:00:02.997177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.548 [2024-04-15 02:00:02.997192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.548 [2024-04-15 02:00:02.997206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.548 [2024-04-15 02:00:02.997220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.548 [2024-04-15 02:00:02.997233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.548 [2024-04-15 02:00:02.997247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.548 [2024-04-15 02:00:02.997260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.548 [2024-04-15 02:00:02.997273] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e91f0 is same with the state(5) to be set
00:25:17.548 [2024-04-15 02:00:02.997322] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.548 [2024-04-15 02:00:02.997350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.548 [2024-04-15 02:00:02.997387] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.548 [2024-04-15 02:00:02.997413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.548 [2024-04-15 02:00:02.997437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.548 [2024-04-15 02:00:02.997461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.548 [2024-04-15 02:00:02.997486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.548 [2024-04-15 02:00:02.997506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.548 [2024-04-15 02:00:02.997519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c7eb0 is same with the state(5) to be set
00:25:17.548 [2024-04-15 02:00:02.997567] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.548 [2024-04-15 02:00:02.997571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.548 [2024-04-15 02:00:02.997606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.548 [2024-04-15 02:00:02.997611] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.548 [2024-04-15 02:00:02.997622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.548 [2024-04-15 02:00:02.997626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.548 [2024-04-15 02:00:02.997636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.548 [2024-04-15 02:00:02.997639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.548 [2024-04-15 02:00:02.997651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.548 [2024-04-15 02:00:02.997652] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.548 [2024-04-15 02:00:02.997667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.548 [2024-04-15 02:00:02.997667] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.548 [2024-04-15 02:00:02.997684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.548 [2024-04-15 02:00:02.997684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.548 [2024-04-15 02:00:02.997698] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.548 [2024-04-15 02:00:02.997700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.548 [2024-04-15 02:00:02.997711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.548 [2024-04-15 02:00:02.997714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188c9f0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997723] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997742] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997768] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997780] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997779] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.549 [2024-04-15 02:00:02.997793] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.549 [2024-04-15 02:00:02.997805] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.549 [2024-04-15 02:00:02.997819] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997832] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.549 [2024-04-15 02:00:02.997846] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.549 [2024-04-15 02:00:02.997859] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.549 [2024-04-15 02:00:02.997872] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.549 [2024-04-15 02:00:02.997885] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.549 [2024-04-15 02:00:02.997897] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997906] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878530 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997910] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997947] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997949] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.549 [2024-04-15 02:00:02.997960] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997974] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.549 [2024-04-15 02:00:02.997986] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.997992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.549 [2024-04-15 02:00:02.998000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.998006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.549 [2024-04-15 02:00:02.998013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.998021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.549 [2024-04-15 02:00:02.998026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.998035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.549 [2024-04-15 02:00:02.998040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.998058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.549 [2024-04-15 02:00:02.998062] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.998074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.549 [2024-04-15 02:00:02.998076] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.998093] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.998094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae40 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.998108] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.998121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.998134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.998140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.549 [2024-04-15 02:00:02.998146] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.998160] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.998162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.549 [2024-04-15 02:00:02.998173] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.998178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.549 [2024-04-15 02:00:02.998190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.998193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.549 [2024-04-15 02:00:02.998203] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.998207] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.549 [2024-04-15 02:00:02.998217] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.549 [2024-04-15 02:00:02.998221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.550 [2024-04-15 02:00:02.998230] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.550 [2024-04-15 02:00:02.998236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:25:17.550 [2024-04-15 02:00:02.998243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.550 [2024-04-15 02:00:02.998250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.550 [2024-04-15 02:00:02.998256] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.550 [2024-04-15 02:00:02.998264] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b0850 is same with the state(5) to be set
00:25:17.550 [2024-04-15 02:00:02.998269] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set
00:25:17.550 [2024-04-15 02:00:02.998282]
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set 00:25:17.550 [2024-04-15 02:00:02.998294] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set 00:25:17.550 [2024-04-15 02:00:02.998306] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set 00:25:17.550 [2024-04-15 02:00:02.998319] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set 00:25:17.550 [2024-04-15 02:00:02.998332] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set 00:25:17.550 [2024-04-15 02:00:02.998352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set 00:25:17.550 [2024-04-15 02:00:02.998364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set 00:25:17.550 [2024-04-15 02:00:02.998376] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set 00:25:17.550 [2024-04-15 02:00:02.998389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set 00:25:17.550 [2024-04-15 02:00:02.998390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998401] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set 00:25:17.550 [2024-04-15 02:00:02.998417] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set 00:25:17.550 [2024-04-15 02:00:02.998421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.998430] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set 00:25:17.550 [2024-04-15 02:00:02.998443] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20117b0 is same with the state(5) to be set 00:25:17.550 [2024-04-15 02:00:02.998452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.998485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.998516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.998546] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.998575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.998604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.998634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.998662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.998691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.998721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.998750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.998780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.998816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.998848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.998878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.998910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.998942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.998975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.998989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.999006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.999020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.999037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.999063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.999080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.999104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.999120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.999133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.999149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.999163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.999179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.999192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.999212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.550 [2024-04-15 02:00:02.999226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.550 [2024-04-15 02:00:02.999242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.551 [2024-04-15 02:00:02.999257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.551 [2024-04-15 02:00:02.999272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.551 [2024-04-15 02:00:02.999285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.551 [2024-04-15 02:00:02.999301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.551 [2024-04-15 02:00:02.999317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.551 [2024-04-15 02:00:02.999332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.551 [2024-04-15 02:00:02.999350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.551 [2024-04-15 02:00:02.999365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.551 [2024-04-15 02:00:02.999380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.551 [2024-04-15 02:00:02.999395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.551 [2024-04-15 02:00:02.999408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.551 [2024-04-15 02:00:02.999424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.551 [2024-04-15 02:00:02.999438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.551 [2024-04-15 02:00:02.999453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.551 [2024-04-15 02:00:02.999467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.551 [2024-04-15 02:00:02.999483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.551 [2024-04-15 02:00:02.999497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.551 [2024-04-15 02:00:02.999500] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.551 [2024-04-15 02:00:02.999526] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.551 [2024-04-15 02:00:02.999542] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.551 [2024-04-15 02:00:02.999562] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.551 [2024-04-15 02:00:02.999579] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.551 [2024-04-15 02:00:02.999592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.551 [2024-04-15 02:00:02.999606] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.551 [2024-04-15 02:00:02.999619] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.551 [2024-04-15 02:00:02.999632] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.551 [2024-04-15 02:00:02.999646] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.551 [2024-04-15 02:00:02.999659] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999673] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.551 [2024-04-15 02:00:02.999686] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.551 [2024-04-15 02:00:02.999699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.551 [2024-04-15 02:00:02.999713] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.551 [2024-04-15 02:00:02.999726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.551 [2024-04-15 02:00:02.999740] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.551 [2024-04-15 02:00:02.999755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999770] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.551 [2024-04-15 02:00:02.999783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.551 [2024-04-15 02:00:02.999796] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.551 [2024-04-15 02:00:02.999810] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.551 [2024-04-15 02:00:02.999823] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.551 [2024-04-15 02:00:02.999836] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.551 [2024-04-15 02:00:02.999850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.551 [2024-04-15 02:00:02.999852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:02.999867] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:02.999869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.552 [2024-04-15 02:00:02.999879] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:02.999883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.552 [2024-04-15 02:00:02.999893] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:02.999900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.552 [2024-04-15 02:00:02.999906] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:02.999914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.552 [2024-04-15 02:00:02.999919] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:02.999931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.552 [2024-04-15 02:00:02.999932] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:02.999951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.552 [2024-04-15 02:00:02.999951] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:02.999967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:02.999968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.552 [2024-04-15 02:00:02.999980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:02.999983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.552 [2024-04-15 02:00:02.999993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:02.999999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.552 [2024-04-15 02:00:03.000007] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.552 [2024-04-15 02:00:03.000021] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000038] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.552 [2024-04-15 02:00:03.000065] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.552 [2024-04-15 02:00:03.000082] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.552 [2024-04-15 02:00:03.000097] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000113] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.552 [2024-04-15 02:00:03.000131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.552 [2024-04-15 02:00:03.000131] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.552 [2024-04-15 02:00:03.000148] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000164] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.552 [2024-04-15 02:00:03.000177] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.552 [2024-04-15 02:00:03.000191] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.552 [2024-04-15 02:00:03.000204] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.552 [2024-04-15 02:00:03.000217] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.552 [2024-04-15 02:00:03.000248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.552 [2024-04-15 02:00:03.000247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.552 [2024-04-15 02:00:03.000265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.552 [2024-04-15 02:00:03.000281] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000297] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.552 [2024-04-15 02:00:03.000309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.552 [2024-04-15 02:00:03.000323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.552 [2024-04-15 02:00:03.000336] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.552 [2024-04-15 02:00:03.000365] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.552 [2024-04-15 02:00:03.000378] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.552 [2024-04-15 02:00:03.000392] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000405] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.552 [2024-04-15 02:00:03.000420] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2011c40 is same with the state(5) to be set
00:25:17.552 [2024-04-15 02:00:03.000422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.552 [2024-04-15 02:00:03.000438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.553 [2024-04-15 02:00:03.000453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.553 [2024-04-15 02:00:03.000468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.553 [2024-04-15 02:00:03.000482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.553 [2024-04-15 02:00:03.000594] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x185a730 was disconnected and freed. reset controller.
00:25:17.553 [2024-04-15 02:00:03.001158] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20120f0 is same with the state(5) to be set
00:25:17.553 [... previous tcp.c:1574 *ERROR* record repeated 62 more times for tqpair=0x20120f0, timestamps 02:00:03.001184 through 02:00:03.002007; identical duplicates elided ...]
00:25:17.553 [2024-04-15 02:00:03.002919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.553 [2024-04-15 02:00:03.002946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.553 [2024-04-15 02:00:03.002968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.553 [2024-04-15 02:00:03.002984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.553 [2024-04-15 02:00:03.003001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.553 [2024-04-15 02:00:03.003025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.553 [2024-04-15 02:00:03.003042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 
[2024-04-15 02:00:03.003605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 
02:00:03.003919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.003977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.554 [2024-04-15 02:00:03.003991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.554 [2024-04-15 02:00:03.004006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004242] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.555 [2024-04-15 02:00:03.004784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.555 [2024-04-15 02:00:03.004803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.004819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.004833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.004848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.004864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.004879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.004893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.004908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.004922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.005009] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x199caf0 was disconnected and freed. reset controller. 00:25:17.556 [2024-04-15 02:00:03.005191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.005227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.005260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.005286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.005317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.005342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.005368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.005403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.005429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.005454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.005480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.005506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.005533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.005558] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.005584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.005615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.005644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.005669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.005695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.005720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.005747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.024937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.025023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.025040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.025065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.025080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.025113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.025128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.025144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.025160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.025176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.025190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.025206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.025220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.025236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.025250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.025265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.025279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.025295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.025309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.025325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.025338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.025362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.025376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.025392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.025406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.025422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.025438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.025454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.025468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.025484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.556 [2024-04-15 02:00:03.025501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.556 [2024-04-15 02:00:03.025518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.025532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.025549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.025564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.025580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.025594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.025611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.025625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.025641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.025655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.025671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.025685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.025701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.025715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.025731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.025746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.025762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.025776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.025793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.025807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.025822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.025837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.025854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.025868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.025884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.025903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.025919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.025933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.025949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.025964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.025979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.025993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:25:17.557 [2024-04-15 02:00:03.026180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 
[2024-04-15 02:00:03.026491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.557 [2024-04-15 02:00:03.026692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.557 [2024-04-15 02:00:03.026821] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19169d0 was disconnected and freed. reset controller. 
00:25:17.557 [2024-04-15 02:00:03.027232] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:25:17.558 [2024-04-15 02:00:03.027296] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1878530 (9): Bad file descriptor 00:25:17.558 [2024-04-15 02:00:03.027372] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18970a0 (9): Bad file descriptor 00:25:17.558 [2024-04-15 02:00:03.027401] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186dba0 (9): Bad file descriptor 00:25:17.558 [2024-04-15 02:00:03.027430] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e91f0 (9): Bad file descriptor 00:25:17.558 [2024-04-15 02:00:03.027454] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c7eb0 (9): Bad file descriptor 00:25:17.558 [2024-04-15 02:00:03.027486] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188c9f0 (9): Bad file descriptor 00:25:17.558 [2024-04-15 02:00:03.027542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.558 [2024-04-15 02:00:03.027563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.027579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.558 [2024-04-15 02:00:03.027593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.027607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.558 [2024-04-15 02:00:03.027621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.027636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.558 [2024-04-15 02:00:03.027649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.027663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b0420 is same with the state(5) to be set 00:25:17.558 [2024-04-15 02:00:03.027710] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.558 [2024-04-15 02:00:03.027731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.027746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.558 [2024-04-15 02:00:03.027759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.027774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.558 
[2024-04-15 02:00:03.027790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.027805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:17.558 [2024-04-15 02:00:03.027818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.027836] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1879950 is same with the state(5) to be set 00:25:17.558 [2024-04-15 02:00:03.027868] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186ae40 (9): Bad file descriptor 00:25:17.558 [2024-04-15 02:00:03.027897] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b0850 (9): Bad file descriptor 00:25:17.558 [2024-04-15 02:00:03.031713] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:25:17.558 [2024-04-15 02:00:03.031755] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:25:17.558 [2024-04-15 02:00:03.032345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-04-15 02:00:03.032380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.032417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-04-15 02:00:03.032446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.032475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-04-15 02:00:03.032502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.032529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-04-15 02:00:03.032554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.032583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-04-15 02:00:03.032610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.032638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-04-15 02:00:03.032663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.032689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25984 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-04-15 02:00:03.032714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.032741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-04-15 02:00:03.032765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.032784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-04-15 02:00:03.032799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.032815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-04-15 02:00:03.032830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.032846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-04-15 02:00:03.032866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.032883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-04-15 02:00:03.032897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.032913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-04-15 02:00:03.032928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.032944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-04-15 02:00:03.032957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.032973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-04-15 02:00:03.032987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.033002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.558 [2024-04-15 02:00:03.033017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.558 [2024-04-15 02:00:03.033033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:17.558 [2024-04-15 02:00:03.033055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.558 [2024-04-15 02:00:03.033073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.558 [2024-04-15 02:00:03.033088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.558 [2024-04-15 02:00:03.033104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.558 [2024-04-15 02:00:03.033118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.558 [2024-04-15 02:00:03.033133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.558 [2024-04-15 02:00:03.033147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.558 [2024-04-15 02:00:03.033162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19153f0 is same with the state(5) to be set
00:25:17.558 [2024-04-15 02:00:03.033239] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19153f0 was disconnected and freed. reset controller.
00:25:17.558 [2024-04-15 02:00:03.033482] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:17.558 [2024-04-15 02:00:03.033873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.558 [2024-04-15 02:00:03.034089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.558 [2024-04-15 02:00:03.034116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1878530 with addr=10.0.0.2, port=4420
00:25:17.558 [2024-04-15 02:00:03.034132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878530 is same with the state(5) to be set
00:25:17.558 [2024-04-15 02:00:03.034363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.558 [2024-04-15 02:00:03.034559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.558 [2024-04-15 02:00:03.034583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18970a0 with addr=10.0.0.2, port=4420
00:25:17.558 [2024-04-15 02:00:03.034599] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18970a0 is same with the state(5) to be set
00:25:17.558 [2024-04-15 02:00:03.034789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.558 [2024-04-15 02:00:03.034974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.558 [2024-04-15 02:00:03.034998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e91f0 with addr=10.0.0.2, port=4420
00:25:17.558 [2024-04-15 02:00:03.035013] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e91f0 is same with the state(5) to be set
00:25:17.558 [2024-04-15 02:00:03.035372] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:17.559 [2024-04-15 02:00:03.036688] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:17.559 [2024-04-15 02:00:03.036765] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:17.559 [2024-04-15 02:00:03.036873] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:25:17.559 [2024-04-15 02:00:03.036934] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1878530 (9): Bad file descriptor
00:25:17.559 [2024-04-15 02:00:03.036962] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18970a0 (9): Bad file descriptor
00:25:17.559 [2024-04-15 02:00:03.036981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e91f0 (9): Bad file descriptor
00:25:17.559 [2024-04-15 02:00:03.037059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.559 [2024-04-15 02:00:03.037083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.559 [2024-04-15 02:00:03.037107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.559 [2024-04-15 02:00:03.037123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.559 [2024-04-15 02:00:03.037141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.559 [2024-04-15 02:00:03.037156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.559 [2024-04-15 02:00:03.037172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.559 [2024-04-15 02:00:03.037186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.559 [2024-04-15 02:00:03.037202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.559 [2024-04-15 02:00:03.037216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.559 [2024-04-15 02:00:03.037233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.559 [2024-04-15 02:00:03.037247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.559 [2024-04-15 02:00:03.037263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.559 [2024-04-15 02:00:03.037287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.559 [2024-04-15 02:00:03.037305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.559 [2024-04-15 02:00:03.037319] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.037976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.037992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.038008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.038022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.038039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.038065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.038084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.038098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.559 [2024-04-15 02:00:03.038115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.559 [2024-04-15 02:00:03.038129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:25:17.560 [2024-04-15 02:00:03.038574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 
[2024-04-15 02:00:03.038878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.038984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.038999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.039014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.039030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.560 [2024-04-15 02:00:03.039049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.560 [2024-04-15 02:00:03.039066] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18653a0 is same with the state(5) to be set 00:25:17.560 [2024-04-15 02:00:03.039154] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18653a0 was disconnected and freed. reset controller. 
00:25:17.560 [2024-04-15 02:00:03.039339] nvme_tcp.c:1159:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:25:17.560 [2024-04-15 02:00:03.039590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.560 [2024-04-15 02:00:03.039791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.561 [2024-04-15 02:00:03.039816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x188c9f0 with addr=10.0.0.2, port=4420
00:25:17.561 [2024-04-15 02:00:03.039832] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188c9f0 is same with the state(5) to be set
00:25:17.561 [2024-04-15 02:00:03.039848] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:25:17.561 [2024-04-15 02:00:03.039862] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:25:17.561 [2024-04-15 02:00:03.039877] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:25:17.561 [2024-04-15 02:00:03.039899] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:25:17.561 [2024-04-15 02:00:03.039912] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:25:17.561 [2024-04-15 02:00:03.039926] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:25:17.561 [2024-04-15 02:00:03.039949] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:25:17.561 [2024-04-15 02:00:03.039964] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:25:17.561 [2024-04-15 02:00:03.039977] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:25:17.561 [2024-04-15 02:00:03.040026] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b0420 (9): Bad file descriptor
00:25:17.561 [2024-04-15 02:00:03.040074] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1879950 (9): Bad file descriptor
00:25:17.561 [2024-04-15 02:00:03.040109] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:17.561 [2024-04-15 02:00:03.041562] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.561 [2024-04-15 02:00:03.041587] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.561 [2024-04-15 02:00:03.041599] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:25:17.561 [2024-04-15 02:00:03.041625] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.561 [2024-04-15 02:00:03.041658] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188c9f0 (9): Bad file descriptor 00:25:17.561 [2024-04-15 02:00:03.041754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.041777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.041800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.041815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.041831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.041846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.041862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.041876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.041892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.041906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.041922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.041937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.041953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.041967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.041983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.041997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042361] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.561 [2024-04-15 02:00:03.042710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.561 [2024-04-15 02:00:03.042725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.042739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.042755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.042769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.042788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.042803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.042819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.042833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.042849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.042864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.042880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.042894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.042910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.042924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.042939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.042953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.042968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.042983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.042999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043291] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.562 [2024-04-15 02:00:03.043579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.562 [2024-04-15 02:00:03.043595] nvme_qpair.c: 
00:25:17.562 [2024-04-15 02:00:03.043609 .. 02:00:03.043727] nvme_qpair.c: repeated NOTICE records: queued READ/WRITE commands on sqid:1, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (tail of per-command dump condensed)
00:25:17.562 [2024-04-15 02:00:03.043742] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199e030 is same with the state(5) to be set
00:25:17.562 [2024-04-15 02:00:03.045002 .. 02:00:03.046968] nvme_qpair.c: repeated NOTICE records: queued READ/WRITE commands on sqid:1 (lba 24320-34688), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (per-command dump condensed)
00:25:17.563 [2024-04-15 02:00:03.046983] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1917fb0 is same with the state(5) to be set
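The "(00/08)" pair printed with every aborted completion above is the NVMe status code type and status code: SCT 00h is the generic command status set, and SC 08h is "Command Aborted due to SQ Deletion", the expected status when a controller reset deletes the submission queue while I/O is still queued. For reference, a minimal self-contained sketch (plain C following the NVMe base spec completion layout, not an SPDK header) of how the (sct/sc) pair and the p/m/dnr bits logged above unpack from a completion entry:

#include <stdint.h>
#include <stdio.h>

/* NVMe completion queue entry, 16 bytes (DW0..DW3 per the base spec). */
struct nvme_cpl {
    uint32_t cdw0;   /* command-specific result: "cdw0:0" in the log */
    uint32_t rsvd;
    uint16_t sqhd;   /* submission queue head pointer: "sqhd:0000" */
    uint16_t sqid;   /* submission queue this command came from */
    uint16_t cid;    /* command identifier: "cid:0" */
    uint16_t status; /* phase tag + status field (upper half of DW3) */
};

int main(void)
{
    /* sct=0, sc=0x08 encodes ABORTED - SQ DELETION, as in the dump above. */
    struct nvme_cpl cpl = { .status = 0x08 << 1 };

    unsigned p   = cpl.status & 0x1;         /* phase tag ("p:0") */
    unsigned sc  = (cpl.status >> 1) & 0xff; /* status code */
    unsigned sct = (cpl.status >> 9) & 0x7;  /* status code type */
    unsigned m   = (cpl.status >> 14) & 0x1; /* more info available ("m:0") */
    unsigned dnr = (cpl.status >> 15) & 0x1; /* do not retry ("dnr:0") */

    printf("(%02x/%02x) cid:%u sqhd:%04x p:%u m:%u dnr:%u\n",
           sct, sc, cpl.cid, cpl.sqhd, p, m, dnr);
    return 0;
}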
00:25:17.564 [2024-04-15 02:00:03.048207 .. 02:00:03.050191] nvme_qpair.c: repeated NOTICE records: queued READ/WRITE commands on sqid:1 (lba 13824-24192), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (per-command dump condensed)
00:25:17.566 [2024-04-15 02:00:03.050206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2a75700 is same with the state(5) to be set
00:25:17.566 [2024-04-15 02:00:03.052492] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:25:17.566 [2024-04-15 02:00:03.052524] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:25:17.566 [2024-04-15 02:00:03.052545] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:25:17.566 [2024-04-15 02:00:03.052916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.566 [2024-04-15 02:00:03.053295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.566 [2024-04-15 02:00:03.053320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186ae40 with addr=10.0.0.2, port=4420
00:25:17.566 [2024-04-15 02:00:03.053342] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae40 is same with the state(5) to be set
00:25:17.566 [2024-04-15 02:00:03.053360] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state
00:25:17.566 [2024-04-15 02:00:03.053373] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed
00:25:17.566 [2024-04-15 02:00:03.053389] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state.
00:25:17.566 [2024-04-15 02:00:03.053464] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:17.566 [2024-04-15 02:00:03.053515] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186ae40 (9): Bad file descriptor
00:25:17.566 [2024-04-15 02:00:03.053887] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
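errno = 111 in the connect() failures above is ECONNREFUSED on Linux: while the target side of the reset is down, nothing is listening on 10.0.0.2:4420, so each reconnect attempt is refused until the listener is re-created. A self-contained sketch (plain POSIX sockets, not SPDK's posix.c; the address and port are copied from the log) that typically reproduces this failure mode when no listener is present on the port:

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the NVMe-oF TCP listener down, this reports:
         * connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Once the target finishes its reset and re-creates the listener, the same call succeeds, which is consistent with the reconnect_poll_async retries seen in the log rather than the refusal being treated as fatal immediately.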
00:25:17.566 [2024-04-15 02:00:03.054123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.566 [2024-04-15 02:00:03.054326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.566 [2024-04-15 02:00:03.054351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186dba0 with addr=10.0.0.2, port=4420
00:25:17.566 [2024-04-15 02:00:03.054367] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186dba0 is same with the state(5) to be set
00:25:17.566 [2024-04-15 02:00:03.054566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.566 [2024-04-15 02:00:03.054753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.566 [2024-04-15 02:00:03.054777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b0850 with addr=10.0.0.2, port=4420
00:25:17.566 [2024-04-15 02:00:03.054792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b0850 is same with the state(5) to be set
00:25:17.566 [2024-04-15 02:00:03.054984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.566 [2024-04-15 02:00:03.055172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:17.566 [2024-04-15 02:00:03.055196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17c7eb0 with addr=10.0.0.2, port=4420
00:25:17.566 [2024-04-15 02:00:03.055213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17c7eb0 is same with the state(5) to be set
00:25:17.566 [2024-04-15 02:00:03.055792 .. 02:00:03.057764] nvme_qpair.c: repeated NOTICE records: queued READ/WRITE commands on sqid:1 (lba 18944-29056), each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (per-command dump condensed)
00:25:17.568 [2024-04-15 02:00:03.057780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1919590 is same with the state(5) to be set
00:25:17.568 [2024-04-15 02:00:03.059014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:17.568 [2024-04-15 02:00:03.059037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:25:17.568 [2024-04-15 02:00:03.059069]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059381] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.059976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.059992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.060005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.060021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.060035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.060058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.060075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.060092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.060106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.060122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.568 [2024-04-15 02:00:03.060136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.568 [2024-04-15 02:00:03.060152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060307] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.569 [2024-04-15 02:00:03.060892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:17.569 [2024-04-15 02:00:03.060907] nvme_qpair.c: 
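The "(00/08)" in the condensed block above is the NVMe completion status pair: status code type 0x0 (generic command status) and status code 0x08, Command Aborted due to SQ Deletion, meaning the target deleted the submission queue while verify I/O was still in flight. When triaging a run like this one, the size of the abort storm is a useful signal; a minimal sketch, assuming the bdevperf console output was captured to a hypothetical file named bdevperf.log:

  # count commands aborted because their submission queue was deleted;
  # the pattern is exactly the status string SPDK prints above
  grep -c 'ABORTED - SQ DELETION (00/08)' bdevperf.log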
00:25:17.569 [2024-04-15 02:00:03.062800] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:25:17.569 [2024-04-15 02:00:03.062833] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:25:17.569 [2024-04-15 02:00:03.062852] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:25:17.569 [2024-04-15 02:00:03.062878] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:25:17.569 task offset: 24320 on job bdev=Nvme2n1 fails
00:25:17.569
                                                 Latency(us)
(all jobs: Core Mask 0x1, workload: verify, depth: 64, IO size: 65536; Verification LBA range: start 0x0 length 0x400; every job ended in error)
Device Information : runtime(s)    IOPS   MiB/s  Fail/s  TO/s    Average        min        max
Nvme1n1            :       0.62  334.89  20.93  103.04  0.00  144991.08   17087.91  125052.40
Nvme2n1            :       0.58  281.37  17.59  109.80  0.00  160114.37   40001.23  157674.76
Nvme3n1            :       0.61  341.52  21.34  105.08  0.00  138562.24   27185.30  115731.72
Nvme4n1            :       0.62  262.52  16.41  102.45  0.00  167560.95  104857.60  135926.52
Nvme5n1            :       0.62  337.54  21.10   32.46  0.00  161899.47    5485.61  165441.99
Nvme6n1            :       0.61  268.27  16.77  104.69  0.00  159465.82   29127.11  156898.04
Nvme7n1            :       0.63  331.24  20.70  101.92  0.00  135785.88   51652.08  126605.84
Nvme8n1            :       0.64  256.75  16.05  100.20  0.00  163068.39   97478.73  127382.57
Nvme9n1            :       0.64  255.48  15.97   99.70  0.00  161973.57   12718.84  140586.86
Nvme10n1           :       0.63  199.64  12.48  101.40  0.00  188420.73  115731.72  163888.55
===============================================================================================
Total              :            2869.22  179.33  960.74  0.00  156676.93   5485.61  165441.99
00:25:17.570 [2024-04-15 02:00:03.090379] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:25:17.570 [2024-04-15 02:00:03.090475] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:25:17.570 [2024-04-15 02:00:03.090586] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186dba0 (9): Bad file descriptor
00:25:17.570 [2024-04-15 02:00:03.090617] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b0850 (9): Bad file descriptor
00:25:17.570 [2024-04-15 02:00:03.090651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17c7eb0 (9): Bad file descriptor
00:25:17.570 [2024-04-15 02:00:03.090669] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:25:17.570 [2024-04-15 02:00:03.090684] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:25:17.570 [2024-04-15 02:00:03.090700] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:25:17.570 [2024-04-15 02:00:03.090782] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:17.570 [2024-04-15 02:00:03.090809] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:17.570 [2024-04-15 02:00:03.090829] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:17.570 [2024-04-15 02:00:03.090848] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:25:17.570 [2024-04-15 02:00:03.090986] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
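One arithmetic check on the table above: the ten per-device IOPS figures sum to exactly the 2869.22 in the Total row (334.89 + 281.37 + 341.52 + 262.52 + 337.54 + 268.27 + 331.24 + 256.75 + 255.48 + 199.64 = 2869.22), so no job was dropped from the summary. The same check can be scripted; a sketch that assumes the summary was saved to the same hypothetical bdevperf.log and that the rows keep the "NvmeXn1 : runtime IOPS ..." shape:

  # sum the IOPS column (field 4) of the per-device rows, skipping the Total row
  awk '$1 ~ /^Nvme[0-9]+n1$/ && $2 == ":" { iops += $4 } END { printf "%.2f\n", iops }' bdevperf.log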
00:25:17.570 [2024-04-15 02:00:03.091405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.570 [2024-04-15 02:00:03.091637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.570 [2024-04-15 02:00:03.091664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19e91f0 with addr=10.0.0.2, port=4420 00:25:17.570 [2024-04-15 02:00:03.091684] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19e91f0 is same with the state(5) to be set 00:25:17.570 [2024-04-15 02:00:03.091908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.570 [2024-04-15 02:00:03.092115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.570 [2024-04-15 02:00:03.092142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18970a0 with addr=10.0.0.2, port=4420 00:25:17.570 [2024-04-15 02:00:03.092158] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18970a0 is same with the state(5) to be set 00:25:17.570 [2024-04-15 02:00:03.092347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.570 [2024-04-15 02:00:03.092544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.570 [2024-04-15 02:00:03.092570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1878530 with addr=10.0.0.2, port=4420 00:25:17.570 [2024-04-15 02:00:03.092586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1878530 is same with the state(5) to be set 00:25:17.570 [2024-04-15 02:00:03.092780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.570 [2024-04-15 02:00:03.092969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.570 [2024-04-15 02:00:03.092995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19b0420 with addr=10.0.0.2, port=4420 00:25:17.570 [2024-04-15 02:00:03.093011] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19b0420 is same with the state(5) to be set 00:25:17.570 [2024-04-15 02:00:03.093251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.570 [2024-04-15 02:00:03.093449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.570 [2024-04-15 02:00:03.093475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1879950 with addr=10.0.0.2, port=4420 00:25:17.570 [2024-04-15 02:00:03.093491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1879950 is same with the state(5) to be set 00:25:17.570 [2024-04-15 02:00:03.093506] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:25:17.570 [2024-04-15 02:00:03.093519] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:25:17.570 [2024-04-15 02:00:03.093538] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
00:25:17.570 [2024-04-15 02:00:03.093558] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:25:17.570 [2024-04-15 02:00:03.093573] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:25:17.570 [2024-04-15 02:00:03.093586] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:25:17.570 [2024-04-15 02:00:03.093603] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:25:17.570 [2024-04-15 02:00:03.093617] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:25:17.570 [2024-04-15 02:00:03.093630] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:25:17.570 [2024-04-15 02:00:03.093664] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:17.570 [2024-04-15 02:00:03.093693] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:17.570 [2024-04-15 02:00:03.093714] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:17.570 [2024-04-15 02:00:03.093732] bdev_nvme.c:2824:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:25:17.570 [2024-04-15 02:00:03.094308] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:25:17.570 [2024-04-15 02:00:03.094358] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.570 [2024-04-15 02:00:03.094385] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.570 [2024-04-15 02:00:03.094397] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:17.570 [2024-04-15 02:00:03.094426] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19e91f0 (9): Bad file descriptor 00:25:17.570 [2024-04-15 02:00:03.094450] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18970a0 (9): Bad file descriptor 00:25:17.570 [2024-04-15 02:00:03.094468] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1878530 (9): Bad file descriptor 00:25:17.570 [2024-04-15 02:00:03.094486] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b0420 (9): Bad file descriptor 00:25:17.570 [2024-04-15 02:00:03.094503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1879950 (9): Bad file descriptor 00:25:17.570 [2024-04-15 02:00:03.094563] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:25:17.570 [2024-04-15 02:00:03.094786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.570 [2024-04-15 02:00:03.094987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.570 [2024-04-15 02:00:03.095014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x188c9f0 with addr=10.0.0.2, port=4420 00:25:17.570 [2024-04-15 02:00:03.095031] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188c9f0 is same with the state(5) to be set 00:25:17.570 [2024-04-15 02:00:03.095060] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:25:17.570 [2024-04-15 02:00:03.095074] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:25:17.570 [2024-04-15 02:00:03.095088] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:25:17.570 [2024-04-15 02:00:03.095106] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:25:17.570 [2024-04-15 02:00:03.095121] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:25:17.570 [2024-04-15 02:00:03.095134] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:25:17.570 [2024-04-15 02:00:03.095156] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:17.570 [2024-04-15 02:00:03.095171] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:25:17.570 [2024-04-15 02:00:03.095185] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:25:17.570 [2024-04-15 02:00:03.095201] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:25:17.570 [2024-04-15 02:00:03.095215] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:25:17.570 [2024-04-15 02:00:03.095228] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:25:17.570 [2024-04-15 02:00:03.095244] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:25:17.570 [2024-04-15 02:00:03.095259] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:25:17.571 [2024-04-15 02:00:03.095272] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:25:17.571 [2024-04-15 02:00:03.095327] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.571 [2024-04-15 02:00:03.095347] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.571 [2024-04-15 02:00:03.095360] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.571 [2024-04-15 02:00:03.095371] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.571 [2024-04-15 02:00:03.095383] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.571 [2024-04-15 02:00:03.095573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.571 [2024-04-15 02:00:03.095755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:17.571 [2024-04-15 02:00:03.095779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x186ae40 with addr=10.0.0.2, port=4420 00:25:17.571 [2024-04-15 02:00:03.095795] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x186ae40 is same with the state(5) to be set 00:25:17.571 [2024-04-15 02:00:03.095814] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188c9f0 (9): Bad file descriptor 00:25:17.571 [2024-04-15 02:00:03.095871] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x186ae40 (9): Bad file descriptor 00:25:17.571 [2024-04-15 02:00:03.095895] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:25:17.571 [2024-04-15 02:00:03.095908] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:25:17.571 [2024-04-15 02:00:03.095922] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:25:17.571 [2024-04-15 02:00:03.095958] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:17.571 [2024-04-15 02:00:03.095977] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:17.571 [2024-04-15 02:00:03.095990] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:25:17.571 [2024-04-15 02:00:03.096004] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:17.571 [2024-04-15 02:00:03.096039] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
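errno = 111 in the posix_sock_create entries above is ECONNREFUSED: after spdk_app_stop, nothing is listening on 10.0.0.2:4420, so every reconnect the bdev_nvme reset path attempts is refused and each controller ends up in the failed state. A quick probe separates "listener gone" from "target hung"; a sketch using bash's /dev/tcp pseudo-device, with the address and port taken from this log:

  if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo 'port 4420 is accepting connections'
  else
    echo 'port 4420 refused or unreachable'   # the errno = 111 case above
  fi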
00:25:18.138 02:00:03 -- target/shutdown.sh@135 -- # nvmfpid= 00:25:18.138 02:00:03 -- target/shutdown.sh@138 -- # sleep 1 00:25:19.073 02:00:04 -- target/shutdown.sh@141 -- # kill -9 2234111 00:25:19.073 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 141: kill: (2234111) - No such process 00:25:19.073 02:00:04 -- target/shutdown.sh@141 -- # true 00:25:19.073 02:00:04 -- target/shutdown.sh@143 -- # stoptarget 00:25:19.073 02:00:04 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:25:19.073 02:00:04 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:25:19.073 02:00:04 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:25:19.073 02:00:04 -- target/shutdown.sh@45 -- # nvmftestfini 00:25:19.073 02:00:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:19.073 02:00:04 -- nvmf/common.sh@116 -- # sync 00:25:19.073 02:00:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:19.073 02:00:04 -- nvmf/common.sh@119 -- # set +e 00:25:19.073 02:00:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:19.073 02:00:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:19.073 rmmod nvme_tcp 00:25:19.073 rmmod nvme_fabrics 00:25:19.073 rmmod nvme_keyring 00:25:19.073 02:00:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:19.073 02:00:04 -- nvmf/common.sh@123 -- # set -e 00:25:19.073 02:00:04 -- nvmf/common.sh@124 -- # return 0 00:25:19.073 02:00:04 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:25:19.073 02:00:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:19.073 02:00:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:19.073 02:00:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:19.073 02:00:04 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:19.073 02:00:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:19.073 02:00:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.073 02:00:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:19.073 02:00:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:21.603 02:00:06 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:21.603 00:25:21.603 real 0m7.980s 00:25:21.603 user 0m20.239s 00:25:21.603 sys 0m1.540s 00:25:21.603 02:00:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:21.603 02:00:06 -- common/autotest_common.sh@10 -- # set +x 00:25:21.603 ************************************ 00:25:21.603 END TEST nvmf_shutdown_tc3 00:25:21.603 ************************************ 00:25:21.603 02:00:06 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:25:21.603 00:25:21.603 real 0m27.902s 00:25:21.603 user 1m18.918s 00:25:21.603 sys 0m6.333s 00:25:21.603 02:00:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:21.603 02:00:06 -- common/autotest_common.sh@10 -- # set +x 00:25:21.603 ************************************ 00:25:21.603 END TEST nvmf_shutdown 00:25:21.603 ************************************ 00:25:21.603 02:00:06 -- nvmf/nvmf.sh@85 -- # timing_exit target 00:25:21.603 02:00:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:21.603 02:00:06 -- common/autotest_common.sh@10 -- # set +x 00:25:21.603 02:00:06 -- nvmf/nvmf.sh@87 -- # timing_enter host 00:25:21.603 02:00:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:21.603 02:00:06 -- common/autotest_common.sh@10 -- # set +x 00:25:21.603 
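The nvmftestfini teardown traced above reduces to a handful of host commands. A manual equivalent, as a sketch, with the module and interface names exactly as this run printed them (cvl_0_1 is specific to this rig):

  sync
  modprobe -v -r nvme-tcp        # the log shows this unloading nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics
  ip -4 addr flush cvl_0_1       # drop the test address from the target-side interface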
02:00:06 -- nvmf/nvmf.sh@89 -- # [[ 0 -eq 0 ]] 00:25:21.603 02:00:06 -- nvmf/nvmf.sh@90 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:21.603 02:00:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:21.603 02:00:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:21.603 02:00:06 -- common/autotest_common.sh@10 -- # set +x 00:25:21.603 ************************************ 00:25:21.603 START TEST nvmf_multicontroller 00:25:21.603 ************************************ 00:25:21.603 02:00:06 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:25:21.603 * Looking for test storage... 00:25:21.603 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:21.603 02:00:06 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:21.603 02:00:06 -- nvmf/common.sh@7 -- # uname -s 00:25:21.603 02:00:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:21.603 02:00:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:21.603 02:00:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:21.603 02:00:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:21.603 02:00:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:21.603 02:00:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:21.603 02:00:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:21.603 02:00:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:21.603 02:00:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:21.603 02:00:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:21.603 02:00:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:21.603 02:00:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:21.603 02:00:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:21.603 02:00:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:21.603 02:00:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:21.603 02:00:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:21.603 02:00:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:21.603 02:00:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:21.603 02:00:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:21.603 02:00:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.603 02:00:06 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.603 02:00:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.603 02:00:06 -- paths/export.sh@5 -- # export PATH 00:25:21.603 02:00:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:21.603 02:00:06 -- nvmf/common.sh@46 -- # : 0 00:25:21.603 02:00:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:21.603 02:00:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:21.603 02:00:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:21.603 02:00:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:21.603 02:00:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:21.603 02:00:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:21.603 02:00:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:21.603 02:00:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:21.603 02:00:06 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:21.603 02:00:06 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:21.603 02:00:06 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:25:21.603 02:00:06 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:25:21.603 02:00:06 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:21.603 02:00:06 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:25:21.603 02:00:06 -- host/multicontroller.sh@23 -- # nvmftestinit 00:25:21.603 02:00:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:21.603 02:00:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:21.603 02:00:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:21.603 02:00:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:21.603 02:00:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:21.603 02:00:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:21.603 02:00:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:21.603 02:00:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:25:21.603 02:00:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:21.603 02:00:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:21.603 02:00:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:21.603 02:00:06 -- common/autotest_common.sh@10 -- # set +x 00:25:23.502 02:00:08 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:23.502 02:00:08 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:23.502 02:00:08 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:23.502 02:00:08 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:23.502 02:00:08 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:23.502 02:00:08 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:23.502 02:00:08 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:23.502 02:00:08 -- nvmf/common.sh@294 -- # net_devs=() 00:25:23.502 02:00:08 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:23.502 02:00:08 -- nvmf/common.sh@295 -- # e810=() 00:25:23.502 02:00:08 -- nvmf/common.sh@295 -- # local -ga e810 00:25:23.502 02:00:08 -- nvmf/common.sh@296 -- # x722=() 00:25:23.502 02:00:08 -- nvmf/common.sh@296 -- # local -ga x722 00:25:23.502 02:00:08 -- nvmf/common.sh@297 -- # mlx=() 00:25:23.502 02:00:08 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:23.502 02:00:08 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.502 02:00:08 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.502 02:00:08 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.502 02:00:08 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.502 02:00:08 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.502 02:00:08 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.502 02:00:08 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.502 02:00:08 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.502 02:00:08 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.502 02:00:08 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.502 02:00:08 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.502 02:00:08 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:23.502 02:00:08 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:23.502 02:00:08 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:23.502 02:00:08 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:23.502 02:00:08 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:23.502 02:00:08 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:23.502 02:00:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:23.502 02:00:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:23.502 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:23.502 02:00:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:23.502 02:00:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:23.502 02:00:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.502 02:00:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.502 02:00:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:23.502 02:00:08 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:23.502 02:00:08 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:23.502 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:23.502 02:00:08 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 
00:25:23.502 02:00:08 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:23.502 02:00:08 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.502 02:00:08 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.502 02:00:08 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:23.502 02:00:08 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:23.502 02:00:08 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:23.502 02:00:08 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:23.502 02:00:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:23.502 02:00:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.502 02:00:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:23.502 02:00:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.502 02:00:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:23.502 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:23.502 02:00:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.502 02:00:08 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:23.502 02:00:08 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.502 02:00:08 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:23.502 02:00:08 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.502 02:00:08 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:23.502 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:23.502 02:00:08 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.502 02:00:08 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:23.502 02:00:08 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:23.502 02:00:08 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:23.502 02:00:08 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:23.503 02:00:08 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:23.503 02:00:08 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.503 02:00:08 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.503 02:00:08 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:23.503 02:00:08 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:23.503 02:00:08 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:23.503 02:00:08 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:23.503 02:00:08 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:23.503 02:00:08 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:23.503 02:00:08 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.503 02:00:08 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:23.503 02:00:08 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:23.503 02:00:08 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:23.503 02:00:08 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:23.503 02:00:08 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:23.503 02:00:08 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:23.503 02:00:08 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:23.503 02:00:08 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:23.503 02:00:08 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:23.503 02:00:08 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:25:23.503 02:00:08 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:23.503 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:23.503 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:25:23.503 00:25:23.503 --- 10.0.0.2 ping statistics --- 00:25:23.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.503 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:25:23.503 02:00:08 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:23.503 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:23.503 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.086 ms 00:25:23.503 00:25:23.503 --- 10.0.0.1 ping statistics --- 00:25:23.503 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.503 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:25:23.503 02:00:08 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.503 02:00:08 -- nvmf/common.sh@410 -- # return 0 00:25:23.503 02:00:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:23.503 02:00:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:23.503 02:00:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:23.503 02:00:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:23.503 02:00:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:23.503 02:00:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:23.503 02:00:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:23.503 02:00:08 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:25:23.503 02:00:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:23.503 02:00:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:23.503 02:00:08 -- common/autotest_common.sh@10 -- # set +x 00:25:23.503 02:00:08 -- nvmf/common.sh@469 -- # nvmfpid=2237137 00:25:23.503 02:00:08 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:23.503 02:00:08 -- nvmf/common.sh@470 -- # waitforlisten 2237137 00:25:23.503 02:00:08 -- common/autotest_common.sh@819 -- # '[' -z 2237137 ']' 00:25:23.503 02:00:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.503 02:00:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:23.503 02:00:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.503 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.503 02:00:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:23.503 02:00:08 -- common/autotest_common.sh@10 -- # set +x 00:25:23.503 [2024-04-15 02:00:08.866289] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:25:23.503 [2024-04-15 02:00:08.866380] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.503 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.503 [2024-04-15 02:00:08.936239] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:23.503 [2024-04-15 02:00:09.027578] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:23.503 [2024-04-15 02:00:09.027753] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:23.503 [2024-04-15 02:00:09.027772] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.503 [2024-04-15 02:00:09.027786] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:23.503 [2024-04-15 02:00:09.027879] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:23.503 [2024-04-15 02:00:09.027969] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:23.503 [2024-04-15 02:00:09.027971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:24.444 02:00:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:24.444 02:00:09 -- common/autotest_common.sh@852 -- # return 0 00:25:24.444 02:00:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:24.444 02:00:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:24.444 02:00:09 -- common/autotest_common.sh@10 -- # set +x 00:25:24.444 02:00:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.444 02:00:09 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:24.444 02:00:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.444 02:00:09 -- common/autotest_common.sh@10 -- # set +x 00:25:24.444 [2024-04-15 02:00:09.829536] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.444 02:00:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.444 02:00:09 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:24.444 02:00:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.444 02:00:09 -- common/autotest_common.sh@10 -- # set +x 00:25:24.444 Malloc0 00:25:24.444 02:00:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.444 02:00:09 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:24.444 02:00:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.444 02:00:09 -- common/autotest_common.sh@10 -- # set +x 00:25:24.444 02:00:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.444 02:00:09 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:24.444 02:00:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.444 02:00:09 -- common/autotest_common.sh@10 -- # set +x 00:25:24.444 02:00:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.444 02:00:09 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:24.444 02:00:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.444 02:00:09 -- common/autotest_common.sh@10 -- # set +x 00:25:24.444 [2024-04-15 02:00:09.891575] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.444 02:00:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.444 02:00:09 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:24.444 02:00:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.444 02:00:09 -- common/autotest_common.sh@10 -- # set +x 00:25:24.444 [2024-04-15 02:00:09.899480] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:24.445 02:00:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
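rpc_cmd in these traces is the harness wrapper around scripts/rpc.py against the default /var/tmp/spdk.sock, so the cnode1 configuration just built is equivalent to roughly this sequence (values as in this run, repo paths shortened):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0              # 64 MiB bdev, 512-byte blocks
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001                                       # -a: allow any host, -s: serial
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421

The two listeners on 4420 and 4421 are what give the host a second path to fail over to later in the test.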
00:25:24.445 02:00:09 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:24.445 02:00:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.445 02:00:09 -- common/autotest_common.sh@10 -- # set +x 00:25:24.445 Malloc1 00:25:24.445 02:00:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.445 02:00:09 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:25:24.445 02:00:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.445 02:00:09 -- common/autotest_common.sh@10 -- # set +x 00:25:24.445 02:00:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.445 02:00:09 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:25:24.445 02:00:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.445 02:00:09 -- common/autotest_common.sh@10 -- # set +x 00:25:24.445 02:00:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.445 02:00:09 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:24.445 02:00:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.445 02:00:09 -- common/autotest_common.sh@10 -- # set +x 00:25:24.445 02:00:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.445 02:00:09 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:25:24.445 02:00:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:24.445 02:00:09 -- common/autotest_common.sh@10 -- # set +x 00:25:24.445 02:00:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:24.445 02:00:09 -- host/multicontroller.sh@44 -- # bdevperf_pid=2237401 00:25:24.445 02:00:09 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:25:24.445 02:00:09 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:24.445 02:00:09 -- host/multicontroller.sh@47 -- # waitforlisten 2237401 /var/tmp/bdevperf.sock 00:25:24.445 02:00:09 -- common/autotest_common.sh@819 -- # '[' -z 2237401 ']' 00:25:24.445 02:00:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:24.445 02:00:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:24.445 02:00:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:24.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
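bdevperf is started with -z, so it comes up idle and waits to be configured over its own RPC socket (-r /var/tmp/bdevperf.sock); -q 128 -o 4096 -w write -t 1 request a queue depth of 128, 4096-byte writes, and a 1-second run. Once it is listening, the host-side controller gets attached through that socket; a sketch with the values used here:

  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -i 10.0.0.2 -c 60000   # -i/-c pin the host-side address and service id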
00:25:24.445 02:00:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:24.445 02:00:09 -- common/autotest_common.sh@10 -- # set +x 00:25:25.376 02:00:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:25.376 02:00:10 -- common/autotest_common.sh@852 -- # return 0 00:25:25.376 02:00:10 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:25.376 02:00:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.376 02:00:10 -- common/autotest_common.sh@10 -- # set +x 00:25:25.634 NVMe0n1 00:25:25.634 02:00:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.634 02:00:11 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:25.634 02:00:11 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:25:25.634 02:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.634 02:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:25.634 02:00:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.634 1 00:25:25.634 02:00:11 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:25.634 02:00:11 -- common/autotest_common.sh@640 -- # local es=0 00:25:25.634 02:00:11 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:25.634 02:00:11 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:25.634 02:00:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:25.634 02:00:11 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:25.634 02:00:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:25.634 02:00:11 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:25:25.634 02:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.634 02:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:25.634 request: 00:25:25.634 { 00:25:25.634 "name": "NVMe0", 00:25:25.634 "trtype": "tcp", 00:25:25.634 "traddr": "10.0.0.2", 00:25:25.634 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:25:25.634 "hostaddr": "10.0.0.2", 00:25:25.634 "hostsvcid": "60000", 00:25:25.634 "adrfam": "ipv4", 00:25:25.634 "trsvcid": "4420", 00:25:25.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:25.634 "method": "bdev_nvme_attach_controller", 00:25:25.634 "req_id": 1 00:25:25.634 } 00:25:25.634 Got JSON-RPC error response 00:25:25.634 response: 00:25:25.634 { 00:25:25.634 "code": -114, 00:25:25.634 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:25.634 } 00:25:25.634 02:00:11 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:25.634 02:00:11 -- common/autotest_common.sh@643 -- # es=1 00:25:25.634 02:00:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:25.634 02:00:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:25.634 02:00:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:25.634 02:00:11 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:25.634 02:00:11 -- common/autotest_common.sh@640 -- # local es=0 00:25:25.634 02:00:11 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:25.634 02:00:11 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:25.634 02:00:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:25.634 02:00:11 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:25.634 02:00:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:25.634 02:00:11 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:25:25.634 02:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.634 02:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:25.634 request: 00:25:25.634 { 00:25:25.634 "name": "NVMe0", 00:25:25.634 "trtype": "tcp", 00:25:25.634 "traddr": "10.0.0.2", 00:25:25.634 "hostaddr": "10.0.0.2", 00:25:25.634 "hostsvcid": "60000", 00:25:25.634 "adrfam": "ipv4", 00:25:25.634 "trsvcid": "4420", 00:25:25.634 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:25.634 "method": "bdev_nvme_attach_controller", 00:25:25.634 "req_id": 1 00:25:25.634 } 00:25:25.634 Got JSON-RPC error response 00:25:25.634 response: 00:25:25.634 { 00:25:25.634 "code": -114, 00:25:25.634 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:25.634 } 00:25:25.634 02:00:11 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:25.634 02:00:11 -- common/autotest_common.sh@643 -- # es=1 00:25:25.634 02:00:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:25.634 02:00:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:25.634 02:00:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:25.634 02:00:11 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:25.634 02:00:11 -- common/autotest_common.sh@640 -- # local es=0 00:25:25.634 02:00:11 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:25.634 02:00:11 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:25.634 02:00:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:25.634 02:00:11 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:25.634 02:00:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:25.634 02:00:11 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:25:25.634 02:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.634 02:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:25.634 request: 00:25:25.634 { 00:25:25.634 "name": "NVMe0", 00:25:25.634 "trtype": "tcp", 00:25:25.634 "traddr": "10.0.0.2", 00:25:25.634 "hostaddr": 
"10.0.0.2", 00:25:25.634 "hostsvcid": "60000", 00:25:25.634 "adrfam": "ipv4", 00:25:25.634 "trsvcid": "4420", 00:25:25.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:25.634 "multipath": "disable", 00:25:25.634 "method": "bdev_nvme_attach_controller", 00:25:25.634 "req_id": 1 00:25:25.634 } 00:25:25.634 Got JSON-RPC error response 00:25:25.634 response: 00:25:25.634 { 00:25:25.634 "code": -114, 00:25:25.634 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:25:25.634 } 00:25:25.634 02:00:11 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:25.634 02:00:11 -- common/autotest_common.sh@643 -- # es=1 00:25:25.634 02:00:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:25.634 02:00:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:25.634 02:00:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:25.634 02:00:11 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:25.634 02:00:11 -- common/autotest_common.sh@640 -- # local es=0 00:25:25.634 02:00:11 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:25.634 02:00:11 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:25:25.635 02:00:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:25.635 02:00:11 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:25:25.635 02:00:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:25:25.635 02:00:11 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:25:25.635 02:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.635 02:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:25.635 request: 00:25:25.635 { 00:25:25.635 "name": "NVMe0", 00:25:25.635 "trtype": "tcp", 00:25:25.635 "traddr": "10.0.0.2", 00:25:25.635 "hostaddr": "10.0.0.2", 00:25:25.635 "hostsvcid": "60000", 00:25:25.635 "adrfam": "ipv4", 00:25:25.635 "trsvcid": "4420", 00:25:25.635 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:25.635 "multipath": "failover", 00:25:25.635 "method": "bdev_nvme_attach_controller", 00:25:25.635 "req_id": 1 00:25:25.635 } 00:25:25.635 Got JSON-RPC error response 00:25:25.635 response: 00:25:25.635 { 00:25:25.635 "code": -114, 00:25:25.635 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:25:25.635 } 00:25:25.635 02:00:11 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:25:25.635 02:00:11 -- common/autotest_common.sh@643 -- # es=1 00:25:25.635 02:00:11 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:25:25.635 02:00:11 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:25:25.635 02:00:11 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:25:25.635 02:00:11 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:25.635 02:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.635 02:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:25.892 00:25:25.892 02:00:11 -- common/autotest_common.sh@579 -- # 
[[ 0 == 0 ]] 00:25:25.892 02:00:11 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:25:25.892 02:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.892 02:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:25.892 02:00:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.892 02:00:11 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:25:25.892 02:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.892 02:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:25.892 00:25:25.892 02:00:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.892 02:00:11 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:25.892 02:00:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:25.892 02:00:11 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:25:25.892 02:00:11 -- common/autotest_common.sh@10 -- # set +x 00:25:25.892 02:00:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:25.892 02:00:11 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:25:25.892 02:00:11 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:27.265 0 00:25:27.265 02:00:12 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:25:27.265 02:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.265 02:00:12 -- common/autotest_common.sh@10 -- # set +x 00:25:27.265 02:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.265 02:00:12 -- host/multicontroller.sh@100 -- # killprocess 2237401 00:25:27.265 02:00:12 -- common/autotest_common.sh@926 -- # '[' -z 2237401 ']' 00:25:27.265 02:00:12 -- common/autotest_common.sh@930 -- # kill -0 2237401 00:25:27.265 02:00:12 -- common/autotest_common.sh@931 -- # uname 00:25:27.265 02:00:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:27.265 02:00:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2237401 00:25:27.265 02:00:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:27.265 02:00:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:27.265 02:00:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2237401' 00:25:27.265 killing process with pid 2237401 00:25:27.265 02:00:12 -- common/autotest_common.sh@945 -- # kill 2237401 00:25:27.265 02:00:12 -- common/autotest_common.sh@950 -- # wait 2237401 00:25:27.265 02:00:12 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:27.265 02:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.265 02:00:12 -- common/autotest_common.sh@10 -- # set +x 00:25:27.265 02:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.265 02:00:12 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:27.265 02:00:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:27.265 02:00:12 -- common/autotest_common.sh@10 -- # set +x 00:25:27.265 02:00:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:27.265 02:00:12 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 
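The four rejected attach calls and the ones that succeed pin down the multipath contract this test is after: reusing an existing controller name is refused with -114 (Linux EALREADY) when the request points at the same network path again, at a different host NQN or subsystem, or at -x disable mode; only a genuinely new path, here the 4421 listener, is accepted. Stripped down, the add/remove/drive cycle just traced is roughly:

  # add a second path to the existing NVMe0 controller via the 4421 listener
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # drop that path again by controller name plus address
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # kick off the queued workload once both controllers are in place
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests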
00:25:27.265 02:00:12 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:27.265 02:00:12 -- common/autotest_common.sh@1597 -- # read -r file 00:25:27.265 02:00:12 -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:25:27.265 02:00:12 -- common/autotest_common.sh@1596 -- # sort -u 00:25:27.265 02:00:12 -- common/autotest_common.sh@1598 -- # cat 00:25:27.265 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:27.265 [2024-04-15 02:00:09.995637] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:25:27.265 [2024-04-15 02:00:09.995736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2237401 ] 00:25:27.265 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.265 [2024-04-15 02:00:10.062730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.265 [2024-04-15 02:00:10.148211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.265 [2024-04-15 02:00:11.379681] bdev.c:4548:bdev_name_add: *ERROR*: Bdev name 091d64d0-f0c1-4e67-ae3b-e8e833289d7c already exists 00:25:27.265 [2024-04-15 02:00:11.379719] bdev.c:7598:bdev_register: *ERROR*: Unable to add uuid:091d64d0-f0c1-4e67-ae3b-e8e833289d7c alias for bdev NVMe1n1 00:25:27.265 [2024-04-15 02:00:11.379745] bdev_nvme.c:4183:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:25:27.265 Running I/O for 1 seconds... 00:25:27.265 00:25:27.265 Latency(us) 00:25:27.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.265 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:25:27.265 NVMe0n1 : 1.01 19960.13 77.97 0.00 0.00 6397.32 4805.97 16408.27 00:25:27.265 =================================================================================================================== 00:25:27.265 Total : 19960.13 77.97 0.00 0.00 6397.32 4805.97 16408.27 00:25:27.265 Received shutdown signal, test time was about 1.000000 seconds 00:25:27.265 00:25:27.265 Latency(us) 00:25:27.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:27.265 =================================================================================================================== 00:25:27.265 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:27.265 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:25:27.265 02:00:12 -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:27.265 02:00:12 -- common/autotest_common.sh@1597 -- # read -r file 00:25:27.265 02:00:12 -- host/multicontroller.sh@108 -- # nvmftestfini 00:25:27.265 02:00:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:27.265 02:00:12 -- nvmf/common.sh@116 -- # sync 00:25:27.265 02:00:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:27.265 02:00:12 -- nvmf/common.sh@119 -- # set +e 00:25:27.265 02:00:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:27.265 02:00:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:27.265 rmmod nvme_tcp 00:25:27.265 rmmod nvme_fabrics 00:25:27.265 rmmod nvme_keyring 00:25:27.266 02:00:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:27.266 02:00:12 -- nvmf/common.sh@123 -- # set 
-e 00:25:27.266 02:00:12 -- nvmf/common.sh@124 -- # return 0 00:25:27.266 02:00:12 -- nvmf/common.sh@477 -- # '[' -n 2237137 ']' 00:25:27.266 02:00:12 -- nvmf/common.sh@478 -- # killprocess 2237137 00:25:27.266 02:00:12 -- common/autotest_common.sh@926 -- # '[' -z 2237137 ']' 00:25:27.266 02:00:12 -- common/autotest_common.sh@930 -- # kill -0 2237137 00:25:27.266 02:00:12 -- common/autotest_common.sh@931 -- # uname 00:25:27.266 02:00:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:27.266 02:00:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2237137 00:25:27.266 02:00:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:25:27.266 02:00:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:25:27.266 02:00:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2237137' 00:25:27.266 killing process with pid 2237137 00:25:27.266 02:00:12 -- common/autotest_common.sh@945 -- # kill 2237137 00:25:27.266 02:00:12 -- common/autotest_common.sh@950 -- # wait 2237137 00:25:27.525 02:00:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:27.525 02:00:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:27.525 02:00:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:27.525 02:00:13 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:27.525 02:00:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:27.525 02:00:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:27.525 02:00:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:27.525 02:00:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.060 02:00:15 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:30.060 00:25:30.060 real 0m8.437s 00:25:30.060 user 0m16.086s 00:25:30.060 sys 0m2.247s 00:25:30.060 02:00:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:30.060 02:00:15 -- common/autotest_common.sh@10 -- # set +x 00:25:30.060 ************************************ 00:25:30.060 END TEST nvmf_multicontroller 00:25:30.060 ************************************ 00:25:30.060 02:00:15 -- nvmf/nvmf.sh@91 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:30.060 02:00:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:30.060 02:00:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:30.060 02:00:15 -- common/autotest_common.sh@10 -- # set +x 00:25:30.060 ************************************ 00:25:30.060 START TEST nvmf_aer 00:25:30.060 ************************************ 00:25:30.060 02:00:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:30.060 * Looking for test storage... 
00:25:30.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:30.060 02:00:15 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:30.060 02:00:15 -- nvmf/common.sh@7 -- # uname -s 00:25:30.060 02:00:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:30.060 02:00:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:30.060 02:00:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:30.060 02:00:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:30.060 02:00:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:30.060 02:00:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:30.060 02:00:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:30.060 02:00:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:30.060 02:00:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:30.060 02:00:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:30.060 02:00:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.060 02:00:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:30.060 02:00:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:30.060 02:00:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:30.060 02:00:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:30.060 02:00:15 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:30.060 02:00:15 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:30.060 02:00:15 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:30.060 02:00:15 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:30.060 02:00:15 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.060 02:00:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.061 02:00:15 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.061 02:00:15 -- paths/export.sh@5 -- # export PATH 00:25:30.061 02:00:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:30.061 02:00:15 -- nvmf/common.sh@46 -- # : 0 00:25:30.061 02:00:15 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:30.061 02:00:15 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:30.061 02:00:15 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:30.061 02:00:15 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:30.061 02:00:15 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:30.061 02:00:15 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:30.061 02:00:15 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:30.061 02:00:15 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:30.061 02:00:15 -- host/aer.sh@11 -- # nvmftestinit 00:25:30.061 02:00:15 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:30.061 02:00:15 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:30.061 02:00:15 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:30.061 02:00:15 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:30.061 02:00:15 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:30.061 02:00:15 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:30.061 02:00:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:30.061 02:00:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:30.061 02:00:15 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:30.061 02:00:15 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:30.061 02:00:15 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:30.061 02:00:15 -- common/autotest_common.sh@10 -- # set +x 00:25:31.964 02:00:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:31.964 02:00:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:31.964 02:00:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:31.964 02:00:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:31.964 02:00:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:31.964 02:00:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:31.964 02:00:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:31.964 02:00:17 -- nvmf/common.sh@294 -- # net_devs=() 00:25:31.964 02:00:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:31.964 02:00:17 -- nvmf/common.sh@295 -- # e810=() 00:25:31.964 02:00:17 -- nvmf/common.sh@295 -- # local -ga e810 00:25:31.964 02:00:17 -- nvmf/common.sh@296 -- # x722=() 00:25:31.964 
02:00:17 -- nvmf/common.sh@296 -- # local -ga x722 00:25:31.964 02:00:17 -- nvmf/common.sh@297 -- # mlx=() 00:25:31.964 02:00:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:31.964 02:00:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.964 02:00:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.964 02:00:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.964 02:00:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.964 02:00:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.964 02:00:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.964 02:00:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.964 02:00:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.964 02:00:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.964 02:00:17 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.964 02:00:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.964 02:00:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:31.964 02:00:17 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:31.964 02:00:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:31.964 02:00:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:31.964 02:00:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:31.964 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:31.964 02:00:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:31.964 02:00:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:31.964 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:31.964 02:00:17 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:31.964 02:00:17 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:31.964 02:00:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.964 02:00:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:31.964 02:00:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.964 02:00:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:31.964 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:31.964 02:00:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.964 02:00:17 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:31.964 02:00:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.964 02:00:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:31.964 02:00:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.964 02:00:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:31.964 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:31.964 02:00:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.964 02:00:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:31.964 02:00:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:31.964 02:00:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:31.964 02:00:17 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.964 02:00:17 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.964 02:00:17 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.964 02:00:17 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:31.964 02:00:17 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.964 02:00:17 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.964 02:00:17 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:31.964 02:00:17 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.964 02:00:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.964 02:00:17 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:31.964 02:00:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:31.964 02:00:17 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.964 02:00:17 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.964 02:00:17 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.964 02:00:17 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.964 02:00:17 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:31.964 02:00:17 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.964 02:00:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:31.964 02:00:17 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:31.964 02:00:17 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:31.964 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:31.964 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:25:31.964 00:25:31.964 --- 10.0.0.2 ping statistics --- 00:25:31.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.964 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:25:31.964 02:00:17 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:31.964 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:31.964 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.272 ms 00:25:31.964 00:25:31.964 --- 10.0.0.1 ping statistics --- 00:25:31.964 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.964 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:25:31.964 02:00:17 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:31.964 02:00:17 -- nvmf/common.sh@410 -- # return 0 00:25:31.964 02:00:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:31.964 02:00:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.964 02:00:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:31.964 02:00:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:31.964 02:00:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:31.964 02:00:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:31.964 02:00:17 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:31.964 02:00:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:31.964 02:00:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:31.964 02:00:17 -- common/autotest_common.sh@10 -- # set +x 00:25:31.964 02:00:17 -- nvmf/common.sh@469 -- # nvmfpid=2239641 00:25:31.964 02:00:17 -- nvmf/common.sh@470 -- # waitforlisten 2239641 00:25:31.964 02:00:17 -- common/autotest_common.sh@819 -- # '[' -z 2239641 ']' 00:25:31.964 02:00:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.964 02:00:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:31.964 02:00:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.964 02:00:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:31.964 02:00:17 -- common/autotest_common.sh@10 -- # set +x 00:25:31.964 02:00:17 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:31.964 [2024-04-15 02:00:17.537449] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:25:31.964 [2024-04-15 02:00:17.537535] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.964 EAL: No free 2048 kB hugepages reported on node 1 00:25:31.964 [2024-04-15 02:00:17.609371] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:32.221 [2024-04-15 02:00:17.704192] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:32.221 [2024-04-15 02:00:17.704330] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:32.221 [2024-04-15 02:00:17.704347] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:32.222 [2024-04-15 02:00:17.704360] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
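Unlike the multicontroller run, which used -m 0xE and got three reactors on cores 1-3, the aer target is started with -m 0xF and claims all four cores. Underneath nvmfappstart this is just the target app launched inside the namespace (path shortened):

  # -i: shm/instance id, -e: tracepoint group mask, -m: reactor core mask
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF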
00:25:32.222 [2024-04-15 02:00:17.708073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.222 [2024-04-15 02:00:17.708129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:32.222 [2024-04-15 02:00:17.708203] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:32.222 [2024-04-15 02:00:17.708205] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:33.192 02:00:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:33.192 02:00:18 -- common/autotest_common.sh@852 -- # return 0 00:25:33.192 02:00:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:33.192 02:00:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:33.192 02:00:18 -- common/autotest_common.sh@10 -- # set +x 00:25:33.192 02:00:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:33.192 02:00:18 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:33.192 02:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.192 02:00:18 -- common/autotest_common.sh@10 -- # set +x 00:25:33.192 [2024-04-15 02:00:18.522646] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:33.192 02:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.192 02:00:18 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:33.192 02:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.192 02:00:18 -- common/autotest_common.sh@10 -- # set +x 00:25:33.192 Malloc0 00:25:33.192 02:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.192 02:00:18 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:33.192 02:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.192 02:00:18 -- common/autotest_common.sh@10 -- # set +x 00:25:33.192 02:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.192 02:00:18 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:33.192 02:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.192 02:00:18 -- common/autotest_common.sh@10 -- # set +x 00:25:33.192 02:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.192 02:00:18 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:33.192 02:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.192 02:00:18 -- common/autotest_common.sh@10 -- # set +x 00:25:33.192 [2024-04-15 02:00:18.575459] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.192 02:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.192 02:00:18 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:33.192 02:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.192 02:00:18 -- common/autotest_common.sh@10 -- # set +x 00:25:33.192 [2024-04-15 02:00:18.583173] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:33.192 [ 00:25:33.192 { 00:25:33.192 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:33.192 "subtype": "Discovery", 00:25:33.192 "listen_addresses": [], 00:25:33.192 "allow_any_host": true, 00:25:33.192 "hosts": [] 00:25:33.192 }, 00:25:33.192 { 00:25:33.192 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:25:33.192 "subtype": "NVMe", 00:25:33.192 "listen_addresses": [ 00:25:33.192 { 00:25:33.192 "transport": "TCP", 00:25:33.192 "trtype": "TCP", 00:25:33.192 "adrfam": "IPv4", 00:25:33.192 "traddr": "10.0.0.2", 00:25:33.192 "trsvcid": "4420" 00:25:33.192 } 00:25:33.192 ], 00:25:33.192 "allow_any_host": true, 00:25:33.192 "hosts": [], 00:25:33.192 "serial_number": "SPDK00000000000001", 00:25:33.192 "model_number": "SPDK bdev Controller", 00:25:33.192 "max_namespaces": 2, 00:25:33.192 "min_cntlid": 1, 00:25:33.192 "max_cntlid": 65519, 00:25:33.192 "namespaces": [ 00:25:33.192 { 00:25:33.192 "nsid": 1, 00:25:33.192 "bdev_name": "Malloc0", 00:25:33.192 "name": "Malloc0", 00:25:33.192 "nguid": "17204D67E5E04FC190A7D3EE9A4AD41E", 00:25:33.192 "uuid": "17204d67-e5e0-4fc1-90a7-d3ee9a4ad41e" 00:25:33.193 } 00:25:33.193 ] 00:25:33.193 } 00:25:33.193 ] 00:25:33.193 02:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.193 02:00:18 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:33.193 02:00:18 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:33.193 02:00:18 -- host/aer.sh@33 -- # aerpid=2239802 00:25:33.193 02:00:18 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:33.193 02:00:18 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:33.193 02:00:18 -- common/autotest_common.sh@1244 -- # local i=0 00:25:33.193 02:00:18 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:33.193 02:00:18 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:25:33.193 02:00:18 -- common/autotest_common.sh@1247 -- # i=1 00:25:33.193 02:00:18 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:25:33.193 EAL: No free 2048 kB hugepages reported on node 1 00:25:33.193 02:00:18 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:33.193 02:00:18 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:25:33.193 02:00:18 -- common/autotest_common.sh@1247 -- # i=2 00:25:33.193 02:00:18 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:25:33.193 02:00:18 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:33.193 02:00:18 -- common/autotest_common.sh@1251 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:33.193 02:00:18 -- common/autotest_common.sh@1255 -- # return 0 00:25:33.193 02:00:18 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:33.193 02:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.193 02:00:18 -- common/autotest_common.sh@10 -- # set +x 00:25:33.473 Malloc1 00:25:33.473 02:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.473 02:00:18 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:33.473 02:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.473 02:00:18 -- common/autotest_common.sh@10 -- # set +x 00:25:33.473 02:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.473 02:00:18 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:33.473 02:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.473 02:00:18 -- common/autotest_common.sh@10 -- # set +x 00:25:33.473 [ 00:25:33.473 { 00:25:33.473 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:33.473 "subtype": "Discovery", 00:25:33.473 "listen_addresses": [], 00:25:33.473 "allow_any_host": true, 00:25:33.473 "hosts": [] 00:25:33.473 }, 00:25:33.473 { 00:25:33.473 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.473 "subtype": "NVMe", 00:25:33.473 "listen_addresses": [ 00:25:33.473 { 00:25:33.473 "transport": "TCP", 00:25:33.473 "trtype": "TCP", 00:25:33.473 "adrfam": "IPv4", 00:25:33.473 "traddr": "10.0.0.2", 00:25:33.473 "trsvcid": "4420" 00:25:33.473 } 00:25:33.473 ], 00:25:33.473 "allow_any_host": true, 00:25:33.473 "hosts": [], 00:25:33.473 "serial_number": "SPDK00000000000001", 00:25:33.473 "model_number": "SPDK bdev Controller", 00:25:33.473 "max_namespaces": 2, 00:25:33.473 "min_cntlid": 1, 00:25:33.473 "max_cntlid": 65519, 00:25:33.473 "namespaces": [ 00:25:33.473 { 00:25:33.473 "nsid": 1, 00:25:33.473 "bdev_name": "Malloc0", 00:25:33.473 "name": "Malloc0", 00:25:33.473 "nguid": "17204D67E5E04FC190A7D3EE9A4AD41E", 00:25:33.473 "uuid": "17204d67-e5e0-4fc1-90a7-d3ee9a4ad41e" 00:25:33.473 }, 00:25:33.473 { 00:25:33.473 "nsid": 2, 00:25:33.473 "bdev_name": "Malloc1", 00:25:33.473 "name": "Malloc1", 00:25:33.473 "nguid": "4369C69BF2E442E2879D519DADC74AA6", 00:25:33.473 "uuid": "4369c69b-f2e4-42e2-879d-519dadc74aa6" 00:25:33.473 } 00:25:33.473 ] 00:25:33.473 } 00:25:33.473 ] 00:25:33.473 02:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.473 02:00:18 -- host/aer.sh@43 -- # wait 2239802 00:25:33.473 Asynchronous Event Request test 00:25:33.473 Attaching to 10.0.0.2 00:25:33.473 Attached to 10.0.0.2 00:25:33.473 Registering asynchronous event callbacks... 00:25:33.473 Starting namespace attribute notice tests for all controllers... 00:25:33.473 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:33.473 aer_cb - Changed Namespace 00:25:33.473 Cleaning up... 
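rpc_cmd in the trace above is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the AER scenario just exercised can be replayed by hand. The command arguments below are taken verbatim from the traced host/aer.sh@14-@19 and @39-@40 steps; only the explicit rpc.py invocation is an assumption:

    # Transport, backing bdev, and a subsystem capped at two namespaces (-m 2).
    sudo ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    sudo ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    sudo ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    sudo ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # With the aer tool connected, adding a second namespace triggers the
    # namespace-attribute-changed asynchronous event reported in the output above.
    sudo ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    sudo ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2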
00:25:33.473 02:00:18 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:33.473 02:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.473 02:00:18 -- common/autotest_common.sh@10 -- # set +x 00:25:33.473 02:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.473 02:00:18 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:33.473 02:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.473 02:00:18 -- common/autotest_common.sh@10 -- # set +x 00:25:33.473 02:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.473 02:00:18 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:33.473 02:00:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.473 02:00:18 -- common/autotest_common.sh@10 -- # set +x 00:25:33.473 02:00:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.473 02:00:18 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:33.473 02:00:18 -- host/aer.sh@51 -- # nvmftestfini 00:25:33.473 02:00:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:33.473 02:00:18 -- nvmf/common.sh@116 -- # sync 00:25:33.473 02:00:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:33.473 02:00:18 -- nvmf/common.sh@119 -- # set +e 00:25:33.473 02:00:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:33.473 02:00:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:33.473 rmmod nvme_tcp 00:25:33.473 rmmod nvme_fabrics 00:25:33.473 rmmod nvme_keyring 00:25:33.473 02:00:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:33.473 02:00:19 -- nvmf/common.sh@123 -- # set -e 00:25:33.473 02:00:19 -- nvmf/common.sh@124 -- # return 0 00:25:33.473 02:00:19 -- nvmf/common.sh@477 -- # '[' -n 2239641 ']' 00:25:33.473 02:00:19 -- nvmf/common.sh@478 -- # killprocess 2239641 00:25:33.473 02:00:19 -- common/autotest_common.sh@926 -- # '[' -z 2239641 ']' 00:25:33.473 02:00:19 -- common/autotest_common.sh@930 -- # kill -0 2239641 00:25:33.473 02:00:19 -- common/autotest_common.sh@931 -- # uname 00:25:33.473 02:00:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:33.473 02:00:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2239641 00:25:33.473 02:00:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:33.473 02:00:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:33.473 02:00:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2239641' 00:25:33.473 killing process with pid 2239641 00:25:33.473 02:00:19 -- common/autotest_common.sh@945 -- # kill 2239641 00:25:33.473 [2024-04-15 02:00:19.038378] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:33.473 02:00:19 -- common/autotest_common.sh@950 -- # wait 2239641 00:25:33.732 02:00:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:33.732 02:00:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:33.732 02:00:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:33.732 02:00:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:33.732 02:00:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:33.732 02:00:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:33.732 02:00:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:33.732 02:00:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.265 02:00:21 -- 
nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:36.265 00:25:36.265 real 0m6.100s 00:25:36.265 user 0m7.024s 00:25:36.265 sys 0m2.042s 00:25:36.265 02:00:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:36.265 02:00:21 -- common/autotest_common.sh@10 -- # set +x 00:25:36.265 ************************************ 00:25:36.265 END TEST nvmf_aer 00:25:36.265 ************************************ 00:25:36.265 02:00:21 -- nvmf/nvmf.sh@92 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:36.265 02:00:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:36.265 02:00:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:36.265 02:00:21 -- common/autotest_common.sh@10 -- # set +x 00:25:36.265 ************************************ 00:25:36.265 START TEST nvmf_async_init 00:25:36.265 ************************************ 00:25:36.265 02:00:21 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:36.265 * Looking for test storage... 00:25:36.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:36.265 02:00:21 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:36.265 02:00:21 -- nvmf/common.sh@7 -- # uname -s 00:25:36.265 02:00:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:36.265 02:00:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:36.265 02:00:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:36.265 02:00:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:36.265 02:00:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:36.265 02:00:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:36.265 02:00:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:36.265 02:00:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:36.265 02:00:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:36.265 02:00:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:36.265 02:00:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:36.265 02:00:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:36.265 02:00:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:36.265 02:00:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:36.265 02:00:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:36.265 02:00:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:36.265 02:00:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:36.265 02:00:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:36.265 02:00:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:36.265 02:00:21 -- paths/export.sh@2 -- # 
PATH=[paths/export.sh@2-@6 export and echo the same Go/protoc/golangci toolchain PATH five times; the duplicated value is elided] 00:25:36.265 02:00:21 -- nvmf/common.sh@46 -- # : 0 00:25:36.265 02:00:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:36.265 02:00:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:36.265 02:00:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:36.265 02:00:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:36.265 02:00:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:36.265 02:00:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:36.265 02:00:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:36.265 02:00:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:36.265 02:00:21 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:36.265 02:00:21 -- host/async_init.sh@14 -- # null_block_size=512 00:25:36.265 02:00:21 -- host/async_init.sh@15 -- # null_bdev=null0 00:25:36.265 02:00:21 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:36.265 02:00:21 -- host/async_init.sh@20 -- # uuidgen 00:25:36.265 02:00:21 -- host/async_init.sh@20 -- # tr -d - 00:25:36.265 02:00:21 -- host/async_init.sh@20 -- # nguid=6805792bc9a94dd8b34a9b5c0bcfeb84 00:25:36.265 02:00:21 -- host/async_init.sh@22 -- # nvmftestinit 00:25:36.265 02:00:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:25:36.265 02:00:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:36.265 02:00:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:36.265 02:00:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:36.265 02:00:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:36.265 02:00:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:36.265 02:00:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:36.265 02:00:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:36.265 02:00:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:36.265 02:00:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:36.265 02:00:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:36.265 02:00:21 -- common/autotest_common.sh@10 -- # set +x 00:25:37.641 02:00:23 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:37.641 02:00:23 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:37.641 02:00:23 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:37.641 02:00:23 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:37.641 02:00:23 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:37.641 02:00:23 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:37.641 02:00:23 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:37.641 02:00:23 -- nvmf/common.sh@294 -- # net_devs=() 00:25:37.641 02:00:23 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:37.641 02:00:23 -- nvmf/common.sh@295 -- # e810=() 00:25:37.641 02:00:23 -- nvmf/common.sh@295 -- # local -ga e810 00:25:37.641 02:00:23 -- nvmf/common.sh@296 -- # x722=() 00:25:37.641 02:00:23 -- nvmf/common.sh@296 -- # local -ga x722 00:25:37.641 02:00:23 -- nvmf/common.sh@297 -- # mlx=() 00:25:37.641 02:00:23 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:37.641 02:00:23 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:37.641 02:00:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:37.641 02:00:23 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:37.641 02:00:23 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:37.641 02:00:23 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:37.641 02:00:23 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:37.641 02:00:23 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:37.641 02:00:23 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:37.641 02:00:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:37.641 02:00:23 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:37.641 02:00:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:37.641 02:00:23 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:37.641 02:00:23 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:37.641 02:00:23 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:37.641 02:00:23 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:37.641 02:00:23 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:37.641 02:00:23 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:37.641 02:00:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:37.641 02:00:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:37.641 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:37.641 02:00:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:37.641 02:00:23 -- 
nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:37.641 02:00:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.641 02:00:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.641 02:00:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:37.641 02:00:23 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:37.641 02:00:23 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:37.641 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:37.641 02:00:23 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:37.641 02:00:23 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:37.641 02:00:23 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:37.641 02:00:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:37.641 02:00:23 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:37.641 02:00:23 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:37.641 02:00:23 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:37.641 02:00:23 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:37.641 02:00:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:37.641 02:00:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.641 02:00:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:37.641 02:00:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.641 02:00:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:37.641 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:37.641 02:00:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.641 02:00:23 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:37.641 02:00:23 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:37.641 02:00:23 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:37.641 02:00:23 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:37.641 02:00:23 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:37.641 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:37.641 02:00:23 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:37.641 02:00:23 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:37.641 02:00:23 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:37.641 02:00:23 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:37.641 02:00:23 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:37.641 02:00:23 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:37.641 02:00:23 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:37.641 02:00:23 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:37.641 02:00:23 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:37.641 02:00:23 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:37.641 02:00:23 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:37.641 02:00:23 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:37.641 02:00:23 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:37.641 02:00:23 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:37.641 02:00:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:37.641 02:00:23 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:37.641 02:00:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:37.900 02:00:23 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:37.900 02:00:23 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
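nvmf_tcp_init, traced from the addr-flush commands above and continuing through the ping checks just below, builds a two-namespace loopback topology out of the two detected E810 ports: cvl_0_0 moves into a fresh namespace for the target, while cvl_0_1 stays in the root namespace for the initiator. Condensed into plain iproute2/iptables commands, using the same names and addresses as the log (a sketch of the effect, not the harness code):

    # Target side: private namespace with 10.0.0.2/24 on cvl_0_0.
    sudo ip netns add cvl_0_0_ns_spdk
    sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Initiator side: root namespace with 10.0.0.1/24 on cvl_0_1; admit NVMe/TCP.
    sudo ip addr add 10.0.0.1/24 dev cvl_0_1
    sudo ip link set cvl_0_1 up
    sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # Sanity-check both directions, as the log does with single-packet pings.
    ping -c 1 10.0.0.2
    sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1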
00:25:37.900 02:00:23 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:37.900 02:00:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:37.900 02:00:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:37.900 02:00:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:37.900 02:00:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:37.900 02:00:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:37.900 02:00:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:37.900 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:37.900 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:25:37.900 00:25:37.900 --- 10.0.0.2 ping statistics --- 00:25:37.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.900 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:25:37.900 02:00:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:37.900 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:37.900 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:25:37.900 00:25:37.900 --- 10.0.0.1 ping statistics --- 00:25:37.900 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:37.900 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:25:37.900 02:00:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:37.900 02:00:23 -- nvmf/common.sh@410 -- # return 0 00:25:37.900 02:00:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:37.900 02:00:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:37.900 02:00:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:37.900 02:00:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:37.900 02:00:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:37.900 02:00:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:37.900 02:00:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:37.900 02:00:23 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:37.900 02:00:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:37.900 02:00:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:37.900 02:00:23 -- common/autotest_common.sh@10 -- # set +x 00:25:37.900 02:00:23 -- nvmf/common.sh@469 -- # nvmfpid=2241872 00:25:37.900 02:00:23 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:37.900 02:00:23 -- nvmf/common.sh@470 -- # waitforlisten 2241872 00:25:37.900 02:00:23 -- common/autotest_common.sh@819 -- # '[' -z 2241872 ']' 00:25:37.900 02:00:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:37.900 02:00:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:37.900 02:00:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:37.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:37.900 02:00:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:37.900 02:00:23 -- common/autotest_common.sh@10 -- # set +x 00:25:37.900 [2024-04-15 02:00:23.493467] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:25:37.900 [2024-04-15 02:00:23.493553] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:37.900 EAL: No free 2048 kB hugepages reported on node 1 00:25:38.159 [2024-04-15 02:00:23.558801] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:38.159 [2024-04-15 02:00:23.640357] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:38.159 [2024-04-15 02:00:23.640532] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:38.159 [2024-04-15 02:00:23.640548] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:38.159 [2024-04-15 02:00:23.640560] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:38.159 [2024-04-15 02:00:23.640593] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:39.092 02:00:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:39.093 02:00:24 -- common/autotest_common.sh@852 -- # return 0 00:25:39.093 02:00:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:39.093 02:00:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:39.093 02:00:24 -- common/autotest_common.sh@10 -- # set +x 00:25:39.093 02:00:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:39.093 02:00:24 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:39.093 02:00:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.093 02:00:24 -- common/autotest_common.sh@10 -- # set +x 00:25:39.093 [2024-04-15 02:00:24.493707] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:39.093 02:00:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.093 02:00:24 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:39.093 02:00:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.093 02:00:24 -- common/autotest_common.sh@10 -- # set +x 00:25:39.093 null0 00:25:39.093 02:00:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.093 02:00:24 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:39.093 02:00:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.093 02:00:24 -- common/autotest_common.sh@10 -- # set +x 00:25:39.093 02:00:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.093 02:00:24 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:39.093 02:00:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.093 02:00:24 -- common/autotest_common.sh@10 -- # set +x 00:25:39.093 02:00:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.093 02:00:24 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 6805792bc9a94dd8b34a9b5c0bcfeb84 00:25:39.093 02:00:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.093 02:00:24 -- common/autotest_common.sh@10 -- # set +x 00:25:39.093 02:00:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.093 02:00:24 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:39.093 02:00:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.093 02:00:24 -- 
common/autotest_common.sh@10 -- # set +x 00:25:39.093 [2024-04-15 02:00:24.533925] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:39.093 02:00:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.093 02:00:24 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:39.093 02:00:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.093 02:00:24 -- common/autotest_common.sh@10 -- # set +x 00:25:39.351 nvme0n1 00:25:39.351 02:00:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.351 02:00:24 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:39.351 02:00:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.351 02:00:24 -- common/autotest_common.sh@10 -- # set +x 00:25:39.351 [ 00:25:39.351 { 00:25:39.351 "name": "nvme0n1", 00:25:39.351 "aliases": [ 00:25:39.351 "6805792b-c9a9-4dd8-b34a-9b5c0bcfeb84" 00:25:39.351 ], 00:25:39.351 "product_name": "NVMe disk", 00:25:39.351 "block_size": 512, 00:25:39.351 "num_blocks": 2097152, 00:25:39.351 "uuid": "6805792b-c9a9-4dd8-b34a-9b5c0bcfeb84", 00:25:39.351 "assigned_rate_limits": { 00:25:39.351 "rw_ios_per_sec": 0, 00:25:39.351 "rw_mbytes_per_sec": 0, 00:25:39.351 "r_mbytes_per_sec": 0, 00:25:39.351 "w_mbytes_per_sec": 0 00:25:39.351 }, 00:25:39.351 "claimed": false, 00:25:39.351 "zoned": false, 00:25:39.351 "supported_io_types": { 00:25:39.351 "read": true, 00:25:39.351 "write": true, 00:25:39.351 "unmap": false, 00:25:39.351 "write_zeroes": true, 00:25:39.351 "flush": true, 00:25:39.351 "reset": true, 00:25:39.351 "compare": true, 00:25:39.351 "compare_and_write": true, 00:25:39.351 "abort": true, 00:25:39.351 "nvme_admin": true, 00:25:39.351 "nvme_io": true 00:25:39.351 }, 00:25:39.351 "driver_specific": { 00:25:39.351 "nvme": [ 00:25:39.351 { 00:25:39.351 "trid": { 00:25:39.351 "trtype": "TCP", 00:25:39.351 "adrfam": "IPv4", 00:25:39.351 "traddr": "10.0.0.2", 00:25:39.351 "trsvcid": "4420", 00:25:39.351 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:39.351 }, 00:25:39.351 "ctrlr_data": { 00:25:39.351 "cntlid": 1, 00:25:39.351 "vendor_id": "0x8086", 00:25:39.351 "model_number": "SPDK bdev Controller", 00:25:39.351 "serial_number": "00000000000000000000", 00:25:39.351 "firmware_revision": "24.01.1", 00:25:39.351 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:39.351 "oacs": { 00:25:39.351 "security": 0, 00:25:39.351 "format": 0, 00:25:39.351 "firmware": 0, 00:25:39.351 "ns_manage": 0 00:25:39.351 }, 00:25:39.351 "multi_ctrlr": true, 00:25:39.351 "ana_reporting": false 00:25:39.351 }, 00:25:39.351 "vs": { 00:25:39.351 "nvme_version": "1.3" 00:25:39.351 }, 00:25:39.351 "ns_data": { 00:25:39.351 "id": 1, 00:25:39.351 "can_share": true 00:25:39.351 } 00:25:39.351 } 00:25:39.351 ], 00:25:39.351 "mp_policy": "active_passive" 00:25:39.351 } 00:25:39.351 } 00:25:39.351 ] 00:25:39.351 02:00:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.351 02:00:24 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:39.351 02:00:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.351 02:00:24 -- common/autotest_common.sh@10 -- # set +x 00:25:39.351 [2024-04-15 02:00:24.782548] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:39.351 [2024-04-15 02:00:24.782635] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x198d020 (9): Bad file 
descriptor 00:25:39.351 [2024-04-15 02:00:24.915202] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:25:39.351 02:00:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.351 02:00:24 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:39.351 02:00:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.351 02:00:24 -- common/autotest_common.sh@10 -- # set +x 00:25:39.351 [ 00:25:39.351 { 00:25:39.351 "name": "nvme0n1", 00:25:39.351 "aliases": [ 00:25:39.351 "6805792b-c9a9-4dd8-b34a-9b5c0bcfeb84" 00:25:39.351 ], 00:25:39.351 "product_name": "NVMe disk", 00:25:39.351 "block_size": 512, 00:25:39.351 "num_blocks": 2097152, 00:25:39.351 "uuid": "6805792b-c9a9-4dd8-b34a-9b5c0bcfeb84", 00:25:39.351 "assigned_rate_limits": { 00:25:39.352 "rw_ios_per_sec": 0, 00:25:39.352 "rw_mbytes_per_sec": 0, 00:25:39.352 "r_mbytes_per_sec": 0, 00:25:39.352 "w_mbytes_per_sec": 0 00:25:39.352 }, 00:25:39.352 "claimed": false, 00:25:39.352 "zoned": false, 00:25:39.352 "supported_io_types": { 00:25:39.352 "read": true, 00:25:39.352 "write": true, 00:25:39.352 "unmap": false, 00:25:39.352 "write_zeroes": true, 00:25:39.352 "flush": true, 00:25:39.352 "reset": true, 00:25:39.352 "compare": true, 00:25:39.352 "compare_and_write": true, 00:25:39.352 "abort": true, 00:25:39.352 "nvme_admin": true, 00:25:39.352 "nvme_io": true 00:25:39.352 }, 00:25:39.352 "driver_specific": { 00:25:39.352 "nvme": [ 00:25:39.352 { 00:25:39.352 "trid": { 00:25:39.352 "trtype": "TCP", 00:25:39.352 "adrfam": "IPv4", 00:25:39.352 "traddr": "10.0.0.2", 00:25:39.352 "trsvcid": "4420", 00:25:39.352 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:39.352 }, 00:25:39.352 "ctrlr_data": { 00:25:39.352 "cntlid": 2, 00:25:39.352 "vendor_id": "0x8086", 00:25:39.352 "model_number": "SPDK bdev Controller", 00:25:39.352 "serial_number": "00000000000000000000", 00:25:39.352 "firmware_revision": "24.01.1", 00:25:39.352 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:39.352 "oacs": { 00:25:39.352 "security": 0, 00:25:39.352 "format": 0, 00:25:39.352 "firmware": 0, 00:25:39.352 "ns_manage": 0 00:25:39.352 }, 00:25:39.352 "multi_ctrlr": true, 00:25:39.352 "ana_reporting": false 00:25:39.352 }, 00:25:39.352 "vs": { 00:25:39.352 "nvme_version": "1.3" 00:25:39.352 }, 00:25:39.352 "ns_data": { 00:25:39.352 "id": 1, 00:25:39.352 "can_share": true 00:25:39.352 } 00:25:39.352 } 00:25:39.352 ], 00:25:39.352 "mp_policy": "active_passive" 00:25:39.352 } 00:25:39.352 } 00:25:39.352 ] 00:25:39.352 02:00:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.352 02:00:24 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.352 02:00:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.352 02:00:24 -- common/autotest_common.sh@10 -- # set +x 00:25:39.352 02:00:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.352 02:00:24 -- host/async_init.sh@53 -- # mktemp 00:25:39.352 02:00:24 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ppSPtufaYD 00:25:39.352 02:00:24 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:39.352 02:00:24 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ppSPtufaYD 00:25:39.352 02:00:24 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:39.352 02:00:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.352 02:00:24 -- common/autotest_common.sh@10 -- # set +x 00:25:39.352 02:00:24 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.352 02:00:24 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:39.352 02:00:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.352 02:00:24 -- common/autotest_common.sh@10 -- # set +x 00:25:39.352 [2024-04-15 02:00:24.963200] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:39.352 [2024-04-15 02:00:24.963378] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:39.352 02:00:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.352 02:00:24 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ppSPtufaYD 00:25:39.352 02:00:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.352 02:00:24 -- common/autotest_common.sh@10 -- # set +x 00:25:39.352 02:00:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.352 02:00:24 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ppSPtufaYD 00:25:39.352 02:00:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.352 02:00:24 -- common/autotest_common.sh@10 -- # set +x 00:25:39.352 [2024-04-15 02:00:24.979207] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:39.610 nvme0n1 00:25:39.610 02:00:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.610 02:00:25 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:39.610 02:00:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.610 02:00:25 -- common/autotest_common.sh@10 -- # set +x 00:25:39.610 [ 00:25:39.610 { 00:25:39.610 "name": "nvme0n1", 00:25:39.610 "aliases": [ 00:25:39.610 "6805792b-c9a9-4dd8-b34a-9b5c0bcfeb84" 00:25:39.610 ], 00:25:39.610 "product_name": "NVMe disk", 00:25:39.610 "block_size": 512, 00:25:39.610 "num_blocks": 2097152, 00:25:39.610 "uuid": "6805792b-c9a9-4dd8-b34a-9b5c0bcfeb84", 00:25:39.610 "assigned_rate_limits": { 00:25:39.610 "rw_ios_per_sec": 0, 00:25:39.610 "rw_mbytes_per_sec": 0, 00:25:39.610 "r_mbytes_per_sec": 0, 00:25:39.610 "w_mbytes_per_sec": 0 00:25:39.610 }, 00:25:39.610 "claimed": false, 00:25:39.610 "zoned": false, 00:25:39.610 "supported_io_types": { 00:25:39.610 "read": true, 00:25:39.610 "write": true, 00:25:39.610 "unmap": false, 00:25:39.610 "write_zeroes": true, 00:25:39.610 "flush": true, 00:25:39.610 "reset": true, 00:25:39.610 "compare": true, 00:25:39.610 "compare_and_write": true, 00:25:39.610 "abort": true, 00:25:39.610 "nvme_admin": true, 00:25:39.610 "nvme_io": true 00:25:39.610 }, 00:25:39.610 "driver_specific": { 00:25:39.610 "nvme": [ 00:25:39.610 { 00:25:39.610 "trid": { 00:25:39.610 "trtype": "TCP", 00:25:39.610 "adrfam": "IPv4", 00:25:39.610 "traddr": "10.0.0.2", 00:25:39.610 "trsvcid": "4421", 00:25:39.610 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:39.610 }, 00:25:39.610 "ctrlr_data": { 00:25:39.610 "cntlid": 3, 00:25:39.610 "vendor_id": "0x8086", 00:25:39.610 "model_number": "SPDK bdev Controller", 00:25:39.610 "serial_number": "00000000000000000000", 00:25:39.610 "firmware_revision": "24.01.1", 00:25:39.610 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:39.610 "oacs": { 00:25:39.610 "security": 0, 00:25:39.610 "format": 0, 00:25:39.610 "firmware": 0, 00:25:39.610 
"ns_manage": 0 00:25:39.610 }, 00:25:39.610 "multi_ctrlr": true, 00:25:39.610 "ana_reporting": false 00:25:39.610 }, 00:25:39.610 "vs": { 00:25:39.610 "nvme_version": "1.3" 00:25:39.610 }, 00:25:39.610 "ns_data": { 00:25:39.610 "id": 1, 00:25:39.610 "can_share": true 00:25:39.610 } 00:25:39.610 } 00:25:39.610 ], 00:25:39.610 "mp_policy": "active_passive" 00:25:39.610 } 00:25:39.610 } 00:25:39.610 ] 00:25:39.610 02:00:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.610 02:00:25 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.610 02:00:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:39.610 02:00:25 -- common/autotest_common.sh@10 -- # set +x 00:25:39.610 02:00:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:39.610 02:00:25 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.ppSPtufaYD 00:25:39.610 02:00:25 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:25:39.610 02:00:25 -- host/async_init.sh@78 -- # nvmftestfini 00:25:39.610 02:00:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:39.610 02:00:25 -- nvmf/common.sh@116 -- # sync 00:25:39.610 02:00:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:39.610 02:00:25 -- nvmf/common.sh@119 -- # set +e 00:25:39.610 02:00:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:39.610 02:00:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:39.610 rmmod nvme_tcp 00:25:39.610 rmmod nvme_fabrics 00:25:39.610 rmmod nvme_keyring 00:25:39.610 02:00:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:39.611 02:00:25 -- nvmf/common.sh@123 -- # set -e 00:25:39.611 02:00:25 -- nvmf/common.sh@124 -- # return 0 00:25:39.611 02:00:25 -- nvmf/common.sh@477 -- # '[' -n 2241872 ']' 00:25:39.611 02:00:25 -- nvmf/common.sh@478 -- # killprocess 2241872 00:25:39.611 02:00:25 -- common/autotest_common.sh@926 -- # '[' -z 2241872 ']' 00:25:39.611 02:00:25 -- common/autotest_common.sh@930 -- # kill -0 2241872 00:25:39.611 02:00:25 -- common/autotest_common.sh@931 -- # uname 00:25:39.611 02:00:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:39.611 02:00:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2241872 00:25:39.611 02:00:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:39.611 02:00:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:39.611 02:00:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2241872' 00:25:39.611 killing process with pid 2241872 00:25:39.611 02:00:25 -- common/autotest_common.sh@945 -- # kill 2241872 00:25:39.611 02:00:25 -- common/autotest_common.sh@950 -- # wait 2241872 00:25:39.869 02:00:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:39.869 02:00:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:39.869 02:00:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:39.869 02:00:25 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:39.869 02:00:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:39.869 02:00:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:39.869 02:00:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:39.869 02:00:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:41.776 02:00:27 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:41.776 00:25:41.776 real 0m6.060s 00:25:41.776 user 0m2.864s 00:25:41.776 sys 0m1.809s 00:25:41.776 02:00:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:41.776 02:00:27 -- 
common/autotest_common.sh@10 -- # set +x 00:25:41.776 ************************************ 00:25:41.776 END TEST nvmf_async_init 00:25:41.776 ************************************ 00:25:42.035 02:00:27 -- nvmf/nvmf.sh@93 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:42.035 02:00:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:42.035 02:00:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:42.036 02:00:27 -- common/autotest_common.sh@10 -- # set +x 00:25:42.036 ************************************ 00:25:42.036 START TEST dma 00:25:42.036 ************************************ 00:25:42.036 02:00:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:42.036 * Looking for test storage... 00:25:42.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:42.036 02:00:27 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.036 02:00:27 -- nvmf/common.sh@7 -- # uname -s 00:25:42.036 02:00:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.036 02:00:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.036 02:00:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.036 02:00:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.036 02:00:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.036 02:00:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.036 02:00:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.036 02:00:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.036 02:00:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.036 02:00:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.036 02:00:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:42.036 02:00:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:42.036 02:00:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.036 02:00:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.036 02:00:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.036 02:00:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.036 02:00:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.036 02:00:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.036 02:00:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.036 02:00:27 -- paths/export.sh@2 -- # PATH=[paths/export.sh@2-@6 export and echo the same toolchain PATH five times; the duplicated value is elided] 00:25:42.036 02:00:27 -- nvmf/common.sh@46 -- # : 0 00:25:42.036 02:00:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:42.036 02:00:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:42.036 02:00:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:42.036 02:00:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.036 02:00:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.036 02:00:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:42.036 02:00:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:42.036 02:00:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:42.036 02:00:27 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:42.036 02:00:27 -- host/dma.sh@13 -- # exit 0 00:25:42.036 00:25:42.036 real 0m0.067s 00:25:42.036 user 0m0.028s 00:25:42.036 sys 0m0.045s 00:25:42.036 02:00:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:42.036 02:00:27 -- common/autotest_common.sh@10 -- # set +x 00:25:42.036 ************************************ 00:25:42.036 END TEST dma 00:25:42.036 ************************************ 00:25:42.036 02:00:27 -- nvmf/nvmf.sh@96 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:42.036 02:00:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:42.036 02:00:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:42.036 02:00:27 -- common/autotest_common.sh@10 -- # set +x 00:25:42.036 ************************************ 00:25:42.036 START TEST nvmf_identify 00:25:42.036 ************************************ 00:25:42.036 02:00:27 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:42.036 * Looking for
test storage... 00:25:42.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:42.036 02:00:27 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.036 02:00:27 -- nvmf/common.sh@7 -- # uname -s 00:25:42.036 02:00:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.036 02:00:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.036 02:00:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.036 02:00:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.036 02:00:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:42.036 02:00:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:42.036 02:00:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.036 02:00:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:42.036 02:00:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.036 02:00:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:42.036 02:00:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:42.036 02:00:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:42.036 02:00:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.036 02:00:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:42.036 02:00:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:42.036 02:00:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.036 02:00:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.036 02:00:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.036 02:00:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.036 02:00:27 -- paths/export.sh@2 -- # PATH=[paths/export.sh@2-@6 export and echo the same toolchain PATH five times; the duplicated value is elided] 00:25:42.036 02:00:27 -- nvmf/common.sh@46 -- # : 0 00:25:42.036 02:00:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:42.036 02:00:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:42.036 02:00:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:42.036 02:00:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.036 02:00:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.036 02:00:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:42.036 02:00:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:42.036 02:00:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:42.036 02:00:27 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:42.036 02:00:27 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:42.036 02:00:27 -- host/identify.sh@14 -- # nvmftestinit 00:25:42.036 02:00:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:42.036 02:00:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.036 02:00:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:42.036 02:00:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:42.036 02:00:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:42.036 02:00:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.036 02:00:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.036 02:00:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:42.036 02:00:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:42.036 02:00:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:42.036 02:00:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:42.036 02:00:27 -- common/autotest_common.sh@10 -- # set +x 00:25:43.940 02:00:29 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:43.941 02:00:29 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:43.941 02:00:29 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:43.941 02:00:29 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:43.941 02:00:29 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:43.941 02:00:29 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:43.941 02:00:29 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:43.941 02:00:29 -- nvmf/common.sh@294 -- # net_devs=() 00:25:43.941 02:00:29 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:43.941 02:00:29 -- nvmf/common.sh@295
-- # e810=() 00:25:43.941 02:00:29 -- nvmf/common.sh@295 -- # local -ga e810 00:25:43.941 02:00:29 -- nvmf/common.sh@296 -- # x722=() 00:25:43.941 02:00:29 -- nvmf/common.sh@296 -- # local -ga x722 00:25:43.941 02:00:29 -- nvmf/common.sh@297 -- # mlx=() 00:25:43.941 02:00:29 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:43.941 02:00:29 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:43.941 02:00:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:43.941 02:00:29 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:43.941 02:00:29 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:43.941 02:00:29 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:43.941 02:00:29 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:43.941 02:00:29 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:43.941 02:00:29 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:43.941 02:00:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:43.941 02:00:29 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:43.941 02:00:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:43.941 02:00:29 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:43.941 02:00:29 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:43.941 02:00:29 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:43.941 02:00:29 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:43.941 02:00:29 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:43.941 02:00:29 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:43.941 02:00:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:43.941 02:00:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:43.941 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:43.941 02:00:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:43.941 02:00:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:43.941 02:00:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.941 02:00:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.941 02:00:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:43.941 02:00:29 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:43.941 02:00:29 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:43.941 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:43.941 02:00:29 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:43.941 02:00:29 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:43.941 02:00:29 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:43.941 02:00:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:43.941 02:00:29 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:43.941 02:00:29 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:43.941 02:00:29 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:43.941 02:00:29 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:43.941 02:00:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:43.941 02:00:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.941 02:00:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:43.941 02:00:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.941 02:00:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:43.941 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:25:43.941 02:00:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.941 02:00:29 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:43.941 02:00:29 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:43.941 02:00:29 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:43.941 02:00:29 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:43.941 02:00:29 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:43.941 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:43.941 02:00:29 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:43.941 02:00:29 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:43.941 02:00:29 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:43.941 02:00:29 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:43.941 02:00:29 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:43.941 02:00:29 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:43.941 02:00:29 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:43.941 02:00:29 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:43.941 02:00:29 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:43.941 02:00:29 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:43.941 02:00:29 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:43.941 02:00:29 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:43.941 02:00:29 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:43.941 02:00:29 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:43.941 02:00:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:43.941 02:00:29 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:43.941 02:00:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:43.941 02:00:29 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:43.941 02:00:29 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:44.200 02:00:29 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:44.200 02:00:29 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:44.200 02:00:29 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:44.200 02:00:29 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:44.200 02:00:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:44.200 02:00:29 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:44.200 02:00:29 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:44.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:44.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:25:44.200 00:25:44.200 --- 10.0.0.2 ping statistics --- 00:25:44.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.200 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:25:44.200 02:00:29 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:44.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:44.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:25:44.200 00:25:44.200 --- 10.0.0.1 ping statistics --- 00:25:44.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:44.200 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:25:44.200 02:00:29 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:44.200 02:00:29 -- nvmf/common.sh@410 -- # return 0 00:25:44.200 02:00:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:44.200 02:00:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:44.200 02:00:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:44.200 02:00:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:44.200 02:00:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:44.200 02:00:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:44.200 02:00:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:44.200 02:00:29 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:44.200 02:00:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:44.200 02:00:29 -- common/autotest_common.sh@10 -- # set +x 00:25:44.200 02:00:29 -- host/identify.sh@19 -- # nvmfpid=2244020 00:25:44.200 02:00:29 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:44.200 02:00:29 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:44.200 02:00:29 -- host/identify.sh@23 -- # waitforlisten 2244020 00:25:44.200 02:00:29 -- common/autotest_common.sh@819 -- # '[' -z 2244020 ']' 00:25:44.200 02:00:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.200 02:00:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:44.200 02:00:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.200 02:00:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:44.200 02:00:29 -- common/autotest_common.sh@10 -- # set +x 00:25:44.200 [2024-04-15 02:00:29.761756] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:25:44.200 [2024-04-15 02:00:29.761831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.200 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.200 [2024-04-15 02:00:29.834217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:44.458 [2024-04-15 02:00:29.929702] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:44.458 [2024-04-15 02:00:29.929872] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:44.458 [2024-04-15 02:00:29.929894] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:44.458 [2024-04-15 02:00:29.929910] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
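For reference, the nvmf_tcp_init sequence traced above builds a back-to-back NVMe/TCP topology out of the two ice ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), port 4420 is opened in iptables, and both directions are ping-verified. A minimal standalone sketch of the same wiring, assuming two back-to-back cabled ports with the hypothetical names eth_tgt and eth_ini:

  #!/usr/bin/env bash
  # Sketch only: reproduce the netns split used by nvmf/common.sh above.
  set -e
  NS=nvmf_tgt_ns                              # hypothetical namespace name
  ip netns add "$NS"
  ip link set eth_tgt netns "$NS"             # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev eth_ini         # initiator stays in the default netns
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev eth_tgt
  ip link set eth_ini up
  ip netns exec "$NS" ip link set eth_tgt up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i eth_ini -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                          # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1      # target -> initiator
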
00:25:44.458 [2024-04-15 02:00:29.929977] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.458 [2024-04-15 02:00:29.930029] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:44.458 [2024-04-15 02:00:29.930073] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:44.458 [2024-04-15 02:00:29.930079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:45.391 02:00:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:45.391 02:00:30 -- common/autotest_common.sh@852 -- # return 0 00:25:45.391 02:00:30 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:45.391 02:00:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.391 02:00:30 -- common/autotest_common.sh@10 -- # set +x 00:25:45.391 [2024-04-15 02:00:30.753719] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:45.391 02:00:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.391 02:00:30 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:45.391 02:00:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:45.391 02:00:30 -- common/autotest_common.sh@10 -- # set +x 00:25:45.391 02:00:30 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:45.391 02:00:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.391 02:00:30 -- common/autotest_common.sh@10 -- # set +x 00:25:45.391 Malloc0 00:25:45.391 02:00:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.391 02:00:30 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:45.391 02:00:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.391 02:00:30 -- common/autotest_common.sh@10 -- # set +x 00:25:45.391 02:00:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.391 02:00:30 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:45.391 02:00:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.391 02:00:30 -- common/autotest_common.sh@10 -- # set +x 00:25:45.391 02:00:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.391 02:00:30 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:45.391 02:00:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.391 02:00:30 -- common/autotest_common.sh@10 -- # set +x 00:25:45.391 [2024-04-15 02:00:30.834881] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.391 02:00:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.391 02:00:30 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:45.391 02:00:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.391 02:00:30 -- common/autotest_common.sh@10 -- # set +x 00:25:45.391 02:00:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.391 02:00:30 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:45.391 02:00:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.391 02:00:30 -- common/autotest_common.sh@10 -- # set +x 00:25:45.391 [2024-04-15 02:00:30.850658] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:45.391 [ 
00:25:45.391 { 00:25:45.391 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:45.391 "subtype": "Discovery", 00:25:45.391 "listen_addresses": [ 00:25:45.391 { 00:25:45.391 "transport": "TCP", 00:25:45.391 "trtype": "TCP", 00:25:45.391 "adrfam": "IPv4", 00:25:45.391 "traddr": "10.0.0.2", 00:25:45.391 "trsvcid": "4420" 00:25:45.391 } 00:25:45.391 ], 00:25:45.391 "allow_any_host": true, 00:25:45.391 "hosts": [] 00:25:45.391 }, 00:25:45.391 { 00:25:45.391 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:45.391 "subtype": "NVMe", 00:25:45.391 "listen_addresses": [ 00:25:45.391 { 00:25:45.391 "transport": "TCP", 00:25:45.391 "trtype": "TCP", 00:25:45.391 "adrfam": "IPv4", 00:25:45.391 "traddr": "10.0.0.2", 00:25:45.391 "trsvcid": "4420" 00:25:45.391 } 00:25:45.391 ], 00:25:45.391 "allow_any_host": true, 00:25:45.391 "hosts": [], 00:25:45.391 "serial_number": "SPDK00000000000001", 00:25:45.391 "model_number": "SPDK bdev Controller", 00:25:45.391 "max_namespaces": 32, 00:25:45.391 "min_cntlid": 1, 00:25:45.391 "max_cntlid": 65519, 00:25:45.391 "namespaces": [ 00:25:45.391 { 00:25:45.391 "nsid": 1, 00:25:45.391 "bdev_name": "Malloc0", 00:25:45.391 "name": "Malloc0", 00:25:45.391 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:45.391 "eui64": "ABCDEF0123456789", 00:25:45.391 "uuid": "82a301cb-ba36-4967-9cd2-b41724eb3cdb" 00:25:45.391 } 00:25:45.391 ] 00:25:45.391 } 00:25:45.391 ] 00:25:45.391 02:00:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.391 02:00:30 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:45.391 [2024-04-15 02:00:30.875838] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
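The subsystem dumped in the JSON above is provisioned entirely over the RPC socket by host/identify.sh. The same configuration can be reproduced by hand against a running nvmf_tgt, assuming the SPDK tree's scripts/rpc.py; this is only a sketch restating the rpc_cmd calls in the trace, not an extra step the test performs:

  # Transport, backing bdev, subsystem, namespace, data + discovery listeners:
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc0          # 64 MB malloc bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_get_subsystems                            # should print the JSON above
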
00:25:45.391 [2024-04-15 02:00:30.875880] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2244181 ] 00:25:45.391 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.391 [2024-04-15 02:00:30.907385] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:45.391 [2024-04-15 02:00:30.907445] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:45.391 [2024-04-15 02:00:30.907454] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:45.391 [2024-04-15 02:00:30.907471] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:45.391 [2024-04-15 02:00:30.907483] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:45.391 [2024-04-15 02:00:30.915083] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:45.391 [2024-04-15 02:00:30.915158] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7975a0 0 00:25:45.391 [2024-04-15 02:00:30.923055] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:45.391 [2024-04-15 02:00:30.923074] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:45.391 [2024-04-15 02:00:30.923083] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:45.391 [2024-04-15 02:00:30.923089] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:45.391 [2024-04-15 02:00:30.923153] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.391 [2024-04-15 02:00:30.923166] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.391 [2024-04-15 02:00:30.923174] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7975a0) 00:25:45.391 [2024-04-15 02:00:30.923192] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:45.391 [2024-04-15 02:00:30.923220] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8023e0, cid 0, qid 0 00:25:45.391 [2024-04-15 02:00:30.931060] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.391 [2024-04-15 02:00:30.931077] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.391 [2024-04-15 02:00:30.931085] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.391 [2024-04-15 02:00:30.931092] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8023e0) on tqpair=0x7975a0 00:25:45.391 [2024-04-15 02:00:30.931127] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:45.391 [2024-04-15 02:00:30.931139] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:45.391 [2024-04-15 02:00:30.931149] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:45.391 [2024-04-15 02:00:30.931172] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.391 [2024-04-15 02:00:30.931181] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: enter 00:25:45.391 [2024-04-15 02:00:30.931188] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7975a0) 00:25:45.391 [2024-04-15 02:00:30.931200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.391 [2024-04-15 02:00:30.931223] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8023e0, cid 0, qid 0 00:25:45.391 [2024-04-15 02:00:30.931434] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.391 [2024-04-15 02:00:30.931446] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.391 [2024-04-15 02:00:30.931454] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.391 [2024-04-15 02:00:30.931461] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8023e0) on tqpair=0x7975a0 00:25:45.391 [2024-04-15 02:00:30.931474] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:45.391 [2024-04-15 02:00:30.931488] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:45.392 [2024-04-15 02:00:30.931501] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.931509] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.931516] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7975a0) 00:25:45.392 [2024-04-15 02:00:30.931526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.392 [2024-04-15 02:00:30.931546] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8023e0, cid 0, qid 0 00:25:45.392 [2024-04-15 02:00:30.931744] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.392 [2024-04-15 02:00:30.931759] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.392 [2024-04-15 02:00:30.931767] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.931778] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8023e0) on tqpair=0x7975a0 00:25:45.392 [2024-04-15 02:00:30.931788] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:45.392 [2024-04-15 02:00:30.931803] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:45.392 [2024-04-15 02:00:30.931816] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.931824] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.931831] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7975a0) 00:25:45.392 [2024-04-15 02:00:30.931842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.392 [2024-04-15 02:00:30.931863] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8023e0, cid 0, qid 0 00:25:45.392 [2024-04-15 02:00:30.932056] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.392 [2024-04-15 02:00:30.932069] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.392 [2024-04-15 02:00:30.932077] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.932083] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8023e0) on tqpair=0x7975a0 00:25:45.392 [2024-04-15 02:00:30.932092] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:45.392 [2024-04-15 02:00:30.932109] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.932119] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.932126] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7975a0) 00:25:45.392 [2024-04-15 02:00:30.932136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.392 [2024-04-15 02:00:30.932157] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8023e0, cid 0, qid 0 00:25:45.392 [2024-04-15 02:00:30.932350] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.392 [2024-04-15 02:00:30.932365] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.392 [2024-04-15 02:00:30.932373] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.932379] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8023e0) on tqpair=0x7975a0 00:25:45.392 [2024-04-15 02:00:30.932388] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:45.392 [2024-04-15 02:00:30.932396] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:45.392 [2024-04-15 02:00:30.932410] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:45.392 [2024-04-15 02:00:30.932533] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:45.392 [2024-04-15 02:00:30.932542] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:45.392 [2024-04-15 02:00:30.932557] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.932565] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.932571] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7975a0) 00:25:45.392 [2024-04-15 02:00:30.932582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.392 [2024-04-15 02:00:30.932602] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8023e0, cid 0, qid 0 00:25:45.392 [2024-04-15 02:00:30.932827] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.392 [2024-04-15 02:00:30.932841] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.392 [2024-04-15 02:00:30.932848] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.392 
[2024-04-15 02:00:30.932855] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8023e0) on tqpair=0x7975a0 00:25:45.392 [2024-04-15 02:00:30.932864] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:45.392 [2024-04-15 02:00:30.932880] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.932889] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.932896] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7975a0) 00:25:45.392 [2024-04-15 02:00:30.932906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.392 [2024-04-15 02:00:30.932927] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8023e0, cid 0, qid 0 00:25:45.392 [2024-04-15 02:00:30.933119] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.392 [2024-04-15 02:00:30.933133] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.392 [2024-04-15 02:00:30.933140] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.933147] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8023e0) on tqpair=0x7975a0 00:25:45.392 [2024-04-15 02:00:30.933155] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:45.392 [2024-04-15 02:00:30.933164] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:45.392 [2024-04-15 02:00:30.933177] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:45.392 [2024-04-15 02:00:30.933197] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:45.392 [2024-04-15 02:00:30.933212] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.933220] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.933227] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7975a0) 00:25:45.392 [2024-04-15 02:00:30.933238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.392 [2024-04-15 02:00:30.933259] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8023e0, cid 0, qid 0 00:25:45.392 [2024-04-15 02:00:30.933500] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.392 [2024-04-15 02:00:30.933516] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.392 [2024-04-15 02:00:30.933524] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.933531] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7975a0): datao=0, datal=4096, cccid=0 00:25:45.392 [2024-04-15 02:00:30.933539] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8023e0) on tqpair(0x7975a0): expected_datao=0, payload_size=4096 00:25:45.392 
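The debug exchange above is the standard fabrics controller bring-up: after the ICReq/ICResp PDU exchange (pdu type 1) and the FABRIC CONNECT on the admin queue, the initiator reads the VS and CAP properties, checks CC.EN, waits for CSTS.RDY = 0, sets CC.EN = 1, waits for CSTS.RDY = 1, and then issues IDENTIFY controller (the 4096-byte c2h transfer that follows). The property reads map to fixed register offsets from the NVMe spec and can be repeated against a connected fabrics controller with nvme-cli, assuming a hypothetical /dev/nvme0:

  # Property offsets per the NVMe spec: CAP=0x00, VS=0x08, CC=0x14, CSTS=0x1c.
  nvme get-property /dev/nvme0 -o 0x00 -H   # CAP : queue limits, timeout, ...
  nvme get-property /dev/nvme0 -o 0x08 -H   # VS  : spec version (1.3 here)
  nvme get-property /dev/nvme0 -o 0x14 -H   # CC  : configuration, incl. the EN bit
  nvme get-property /dev/nvme0 -o 0x1c -H   # CSTS: status, incl. the RDY bit
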
[2024-04-15 02:00:30.933552] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.933560] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.933646] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.392 [2024-04-15 02:00:30.933658] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.392 [2024-04-15 02:00:30.933665] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.933672] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8023e0) on tqpair=0x7975a0 00:25:45.392 [2024-04-15 02:00:30.933688] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:45.392 [2024-04-15 02:00:30.933703] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:45.392 [2024-04-15 02:00:30.933711] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:45.392 [2024-04-15 02:00:30.933720] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:45.392 [2024-04-15 02:00:30.933728] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:45.392 [2024-04-15 02:00:30.933736] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:45.392 [2024-04-15 02:00:30.933751] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:45.392 [2024-04-15 02:00:30.933764] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.933772] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.392 [2024-04-15 02:00:30.933778] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7975a0) 00:25:45.392 [2024-04-15 02:00:30.933790] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:45.392 [2024-04-15 02:00:30.933810] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8023e0, cid 0, qid 0 00:25:45.393 [2024-04-15 02:00:30.934006] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.393 [2024-04-15 02:00:30.934022] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.393 [2024-04-15 02:00:30.934029] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.934036] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x8023e0) on tqpair=0x7975a0 00:25:45.393 [2024-04-15 02:00:30.934055] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.934065] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.934072] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7975a0) 00:25:45.393 [2024-04-15 02:00:30.934082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.393 [2024-04-15 02:00:30.934092] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.934099] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.934106] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7975a0) 00:25:45.393 [2024-04-15 02:00:30.934115] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.393 [2024-04-15 02:00:30.934125] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.934132] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.934138] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7975a0) 00:25:45.393 [2024-04-15 02:00:30.934147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.393 [2024-04-15 02:00:30.934158] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.934165] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.934171] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7975a0) 00:25:45.393 [2024-04-15 02:00:30.934180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.393 [2024-04-15 02:00:30.934190] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:25:45.393 [2024-04-15 02:00:30.934213] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:45.393 [2024-04-15 02:00:30.934226] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.934234] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.934240] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7975a0) 00:25:45.393 [2024-04-15 02:00:30.934251] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.393 [2024-04-15 02:00:30.934274] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8023e0, cid 0, qid 0 00:25:45.393 [2024-04-15 02:00:30.934285] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x802540, cid 1, qid 0 00:25:45.393 [2024-04-15 02:00:30.934293] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8026a0, cid 2, qid 0 00:25:45.393 [2024-04-15 02:00:30.934301] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x802800, cid 3, qid 0 00:25:45.393 [2024-04-15 02:00:30.934309] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x802960, cid 4, qid 0 00:25:45.393 [2024-04-15 02:00:30.934528] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.393 [2024-04-15 02:00:30.934544] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.393 [2024-04-15 02:00:30.934551] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.934558] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x802960) on 
tqpair=0x7975a0 00:25:45.393 [2024-04-15 02:00:30.934567] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:45.393 [2024-04-15 02:00:30.934575] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:45.393 [2024-04-15 02:00:30.934593] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.934602] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.934609] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7975a0) 00:25:45.393 [2024-04-15 02:00:30.934620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.393 [2024-04-15 02:00:30.934640] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x802960, cid 4, qid 0 00:25:45.393 [2024-04-15 02:00:30.934849] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.393 [2024-04-15 02:00:30.934862] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.393 [2024-04-15 02:00:30.934869] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.934875] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7975a0): datao=0, datal=4096, cccid=4 00:25:45.393 [2024-04-15 02:00:30.934883] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x802960) on tqpair(0x7975a0): expected_datao=0, payload_size=4096 00:25:45.393 [2024-04-15 02:00:30.934965] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.934974] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.977061] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.393 [2024-04-15 02:00:30.977080] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.393 [2024-04-15 02:00:30.977088] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.977096] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x802960) on tqpair=0x7975a0 00:25:45.393 [2024-04-15 02:00:30.977116] nvme_ctrlr.c:4023:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:45.393 [2024-04-15 02:00:30.977147] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.977164] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.977172] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7975a0) 00:25:45.393 [2024-04-15 02:00:30.977184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.393 [2024-04-15 02:00:30.977196] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.977204] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.977210] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7975a0) 00:25:45.393 [2024-04-15 02:00:30.977220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE 
(18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.393 [2024-04-15 02:00:30.977249] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x802960, cid 4, qid 0 00:25:45.393 [2024-04-15 02:00:30.977262] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x802ac0, cid 5, qid 0 00:25:45.393 [2024-04-15 02:00:30.977540] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.393 [2024-04-15 02:00:30.977555] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.393 [2024-04-15 02:00:30.977563] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.977569] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7975a0): datao=0, datal=1024, cccid=4 00:25:45.393 [2024-04-15 02:00:30.977577] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x802960) on tqpair(0x7975a0): expected_datao=0, payload_size=1024 00:25:45.393 [2024-04-15 02:00:30.977588] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.977596] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.977604] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.393 [2024-04-15 02:00:30.977614] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.393 [2024-04-15 02:00:30.977621] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:30.977628] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x802ac0) on tqpair=0x7975a0 00:25:45.393 [2024-04-15 02:00:31.018227] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.393 [2024-04-15 02:00:31.018245] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.393 [2024-04-15 02:00:31.018253] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:31.018260] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x802960) on tqpair=0x7975a0 00:25:45.393 [2024-04-15 02:00:31.018278] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:31.018288] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:31.018295] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7975a0) 00:25:45.393 [2024-04-15 02:00:31.018306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.393 [2024-04-15 02:00:31.018336] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x802960, cid 4, qid 0 00:25:45.393 [2024-04-15 02:00:31.018549] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.393 [2024-04-15 02:00:31.018562] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.393 [2024-04-15 02:00:31.018569] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:31.018576] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7975a0): datao=0, datal=3072, cccid=4 00:25:45.393 [2024-04-15 02:00:31.018584] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x802960) on tqpair(0x7975a0): expected_datao=0, payload_size=3072 00:25:45.393 [2024-04-15 02:00:31.018595] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 
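The GET LOG PAGE commands traced here fetch the discovery log (log ID 0x70) in the usual three-step pattern: read the 1024-byte log header, learn the record count (2 here), read the full log (header plus 2 x 1024-byte records = 3072 bytes), then re-read the 8-byte generation counter to confirm the log did not change mid-read. The transfer size is encoded in cdw10 as a 0-based dword count (NUMDL, bits 31:16) above the log ID (bits 7:0), which a few lines of shell arithmetic can verify:

  # Decode the cdw10 values seen in the trace: 0x00ff0070, 0x02ff0070, 0x00010070.
  for cdw10 in 0x00ff0070 0x02ff0070 0x00010070; do
      lid=$(( cdw10 & 0xff ))                  # log identifier (0x70 = discovery)
      bytes=$(( ((cdw10 >> 16) + 1) * 4 ))     # NUMDL is 0-based, counted in dwords
      printf 'cdw10=%s lid=0x%02x transfer=%d bytes\n' "$cdw10" "$lid" "$bytes"
  done
  # -> 1024 (header), 3072 (header + 2 records), 8 (generation-counter recheck)
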
00:25:45.393 [2024-04-15 02:00:31.018603] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:31.018704] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.393 [2024-04-15 02:00:31.018717] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.393 [2024-04-15 02:00:31.018724] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:31.018731] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x802960) on tqpair=0x7975a0 00:25:45.393 [2024-04-15 02:00:31.018745] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:31.018754] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:31.018761] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7975a0) 00:25:45.393 [2024-04-15 02:00:31.018771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.393 [2024-04-15 02:00:31.018799] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x802960, cid 4, qid 0 00:25:45.393 [2024-04-15 02:00:31.019005] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.393 [2024-04-15 02:00:31.019021] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.393 [2024-04-15 02:00:31.019028] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.393 [2024-04-15 02:00:31.019034] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7975a0): datao=0, datal=8, cccid=4 00:25:45.394 [2024-04-15 02:00:31.019042] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x802960) on tqpair(0x7975a0): expected_datao=0, payload_size=8 00:25:45.394 [2024-04-15 02:00:31.019061] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.394 [2024-04-15 02:00:31.019069] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.655 [2024-04-15 02:00:31.059217] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.656 [2024-04-15 02:00:31.059236] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.656 [2024-04-15 02:00:31.059244] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.656 [2024-04-15 02:00:31.059251] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x802960) on tqpair=0x7975a0 00:25:45.656 ===================================================== 00:25:45.656 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:45.656 ===================================================== 00:25:45.656 Controller Capabilities/Features 00:25:45.656 ================================ 00:25:45.656 Vendor ID: 0000 00:25:45.656 Subsystem Vendor ID: 0000 00:25:45.656 Serial Number: .................... 00:25:45.656 Model Number: ........................................ 
00:25:45.656 Firmware Version: 24.01.1 00:25:45.656 Recommended Arb Burst: 0 00:25:45.656 IEEE OUI Identifier: 00 00 00 00:25:45.656 Multi-path I/O 00:25:45.656 May have multiple subsystem ports: No 00:25:45.656 May have multiple controllers: No 00:25:45.656 Associated with SR-IOV VF: No 00:25:45.656 Max Data Transfer Size: 131072 00:25:45.656 Max Number of Namespaces: 0 00:25:45.656 Max Number of I/O Queues: 1024 00:25:45.656 NVMe Specification Version (VS): 1.3 00:25:45.656 NVMe Specification Version (Identify): 1.3 00:25:45.656 Maximum Queue Entries: 128 00:25:45.656 Contiguous Queues Required: Yes 00:25:45.656 Arbitration Mechanisms Supported 00:25:45.656 Weighted Round Robin: Not Supported 00:25:45.656 Vendor Specific: Not Supported 00:25:45.656 Reset Timeout: 15000 ms 00:25:45.656 Doorbell Stride: 4 bytes 00:25:45.656 NVM Subsystem Reset: Not Supported 00:25:45.656 Command Sets Supported 00:25:45.656 NVM Command Set: Supported 00:25:45.656 Boot Partition: Not Supported 00:25:45.656 Memory Page Size Minimum: 4096 bytes 00:25:45.656 Memory Page Size Maximum: 4096 bytes 00:25:45.656 Persistent Memory Region: Not Supported 00:25:45.656 Optional Asynchronous Events Supported 00:25:45.656 Namespace Attribute Notices: Not Supported 00:25:45.656 Firmware Activation Notices: Not Supported 00:25:45.656 ANA Change Notices: Not Supported 00:25:45.656 PLE Aggregate Log Change Notices: Not Supported 00:25:45.656 LBA Status Info Alert Notices: Not Supported 00:25:45.656 EGE Aggregate Log Change Notices: Not Supported 00:25:45.656 Normal NVM Subsystem Shutdown event: Not Supported 00:25:45.656 Zone Descriptor Change Notices: Not Supported 00:25:45.656 Discovery Log Change Notices: Supported 00:25:45.656 Controller Attributes 00:25:45.656 128-bit Host Identifier: Not Supported 00:25:45.656 Non-Operational Permissive Mode: Not Supported 00:25:45.656 NVM Sets: Not Supported 00:25:45.656 Read Recovery Levels: Not Supported 00:25:45.656 Endurance Groups: Not Supported 00:25:45.656 Predictable Latency Mode: Not Supported 00:25:45.656 Traffic Based Keep ALive: Not Supported 00:25:45.656 Namespace Granularity: Not Supported 00:25:45.656 SQ Associations: Not Supported 00:25:45.656 UUID List: Not Supported 00:25:45.656 Multi-Domain Subsystem: Not Supported 00:25:45.656 Fixed Capacity Management: Not Supported 00:25:45.656 Variable Capacity Management: Not Supported 00:25:45.656 Delete Endurance Group: Not Supported 00:25:45.656 Delete NVM Set: Not Supported 00:25:45.656 Extended LBA Formats Supported: Not Supported 00:25:45.656 Flexible Data Placement Supported: Not Supported 00:25:45.656 00:25:45.656 Controller Memory Buffer Support 00:25:45.656 ================================ 00:25:45.656 Supported: No 00:25:45.656 00:25:45.656 Persistent Memory Region Support 00:25:45.656 ================================ 00:25:45.656 Supported: No 00:25:45.656 00:25:45.656 Admin Command Set Attributes 00:25:45.656 ============================ 00:25:45.656 Security Send/Receive: Not Supported 00:25:45.656 Format NVM: Not Supported 00:25:45.656 Firmware Activate/Download: Not Supported 00:25:45.656 Namespace Management: Not Supported 00:25:45.656 Device Self-Test: Not Supported 00:25:45.656 Directives: Not Supported 00:25:45.656 NVMe-MI: Not Supported 00:25:45.656 Virtualization Management: Not Supported 00:25:45.656 Doorbell Buffer Config: Not Supported 00:25:45.656 Get LBA Status Capability: Not Supported 00:25:45.656 Command & Feature Lockdown Capability: Not Supported 00:25:45.656 Abort Command Limit: 1 00:25:45.656 
Async Event Request Limit: 4 00:25:45.656 Number of Firmware Slots: N/A 00:25:45.656 Firmware Slot 1 Read-Only: N/A 00:25:45.656 Firmware Activation Without Reset: N/A 00:25:45.656 Multiple Update Detection Support: N/A 00:25:45.656 Firmware Update Granularity: No Information Provided 00:25:45.656 Per-Namespace SMART Log: No 00:25:45.656 Asymmetric Namespace Access Log Page: Not Supported 00:25:45.656 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:45.656 Command Effects Log Page: Not Supported 00:25:45.656 Get Log Page Extended Data: Supported 00:25:45.656 Telemetry Log Pages: Not Supported 00:25:45.656 Persistent Event Log Pages: Not Supported 00:25:45.656 Supported Log Pages Log Page: May Support 00:25:45.656 Commands Supported & Effects Log Page: Not Supported 00:25:45.656 Feature Identifiers & Effects Log Page:May Support 00:25:45.656 NVMe-MI Commands & Effects Log Page: May Support 00:25:45.656 Data Area 4 for Telemetry Log: Not Supported 00:25:45.656 Error Log Page Entries Supported: 128 00:25:45.656 Keep Alive: Not Supported 00:25:45.656 00:25:45.656 NVM Command Set Attributes 00:25:45.656 ========================== 00:25:45.656 Submission Queue Entry Size 00:25:45.656 Max: 1 00:25:45.656 Min: 1 00:25:45.656 Completion Queue Entry Size 00:25:45.656 Max: 1 00:25:45.656 Min: 1 00:25:45.656 Number of Namespaces: 0 00:25:45.656 Compare Command: Not Supported 00:25:45.656 Write Uncorrectable Command: Not Supported 00:25:45.656 Dataset Management Command: Not Supported 00:25:45.656 Write Zeroes Command: Not Supported 00:25:45.656 Set Features Save Field: Not Supported 00:25:45.656 Reservations: Not Supported 00:25:45.656 Timestamp: Not Supported 00:25:45.656 Copy: Not Supported 00:25:45.656 Volatile Write Cache: Not Present 00:25:45.656 Atomic Write Unit (Normal): 1 00:25:45.656 Atomic Write Unit (PFail): 1 00:25:45.656 Atomic Compare & Write Unit: 1 00:25:45.656 Fused Compare & Write: Supported 00:25:45.656 Scatter-Gather List 00:25:45.656 SGL Command Set: Supported 00:25:45.656 SGL Keyed: Supported 00:25:45.656 SGL Bit Bucket Descriptor: Not Supported 00:25:45.656 SGL Metadata Pointer: Not Supported 00:25:45.656 Oversized SGL: Not Supported 00:25:45.656 SGL Metadata Address: Not Supported 00:25:45.656 SGL Offset: Supported 00:25:45.656 Transport SGL Data Block: Not Supported 00:25:45.656 Replay Protected Memory Block: Not Supported 00:25:45.656 00:25:45.656 Firmware Slot Information 00:25:45.656 ========================= 00:25:45.656 Active slot: 0 00:25:45.656 00:25:45.656 00:25:45.656 Error Log 00:25:45.656 ========= 00:25:45.656 00:25:45.656 Active Namespaces 00:25:45.656 ================= 00:25:45.656 Discovery Log Page 00:25:45.656 ================== 00:25:45.656 Generation Counter: 2 00:25:45.656 Number of Records: 2 00:25:45.656 Record Format: 0 00:25:45.656 00:25:45.656 Discovery Log Entry 0 00:25:45.656 ---------------------- 00:25:45.656 Transport Type: 3 (TCP) 00:25:45.656 Address Family: 1 (IPv4) 00:25:45.656 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:45.656 Entry Flags: 00:25:45.656 Duplicate Returned Information: 1 00:25:45.656 Explicit Persistent Connection Support for Discovery: 1 00:25:45.656 Transport Requirements: 00:25:45.656 Secure Channel: Not Required 00:25:45.656 Port ID: 0 (0x0000) 00:25:45.656 Controller ID: 65535 (0xffff) 00:25:45.656 Admin Max SQ Size: 128 00:25:45.656 Transport Service Identifier: 4420 00:25:45.656 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:45.656 Transport Address: 10.0.0.2 00:25:45.656 
Discovery Log Entry 1 00:25:45.656 ---------------------- 00:25:45.656 Transport Type: 3 (TCP) 00:25:45.656 Address Family: 1 (IPv4) 00:25:45.656 Subsystem Type: 2 (NVM Subsystem) 00:25:45.656 Entry Flags: 00:25:45.656 Duplicate Returned Information: 0 00:25:45.656 Explicit Persistent Connection Support for Discovery: 0 00:25:45.656 Transport Requirements: 00:25:45.656 Secure Channel: Not Required 00:25:45.656 Port ID: 0 (0x0000) 00:25:45.656 Controller ID: 65535 (0xffff) 00:25:45.656 Admin Max SQ Size: 128 00:25:45.656 Transport Service Identifier: 4420 00:25:45.656 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:45.656 Transport Address: 10.0.0.2 [2024-04-15 02:00:31.059370] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:45.657 [2024-04-15 02:00:31.059394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.657 [2024-04-15 02:00:31.059407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.657 [2024-04-15 02:00:31.059417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.657 [2024-04-15 02:00:31.059427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.657 [2024-04-15 02:00:31.059444] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.059454] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.059461] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7975a0) 00:25:45.657 [2024-04-15 02:00:31.059472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.657 [2024-04-15 02:00:31.059512] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x802800, cid 3, qid 0 00:25:45.657 [2024-04-15 02:00:31.059717] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.657 [2024-04-15 02:00:31.059733] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.657 [2024-04-15 02:00:31.059741] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.059747] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x802800) on tqpair=0x7975a0 00:25:45.657 [2024-04-15 02:00:31.059759] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.059772] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.059780] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7975a0) 00:25:45.657 [2024-04-15 02:00:31.059791] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.657 [2024-04-15 02:00:31.059818] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x802800, cid 3, qid 0 00:25:45.657 [2024-04-15 02:00:31.060055] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.657 [2024-04-15 02:00:31.060069] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.657 [2024-04-15 02:00:31.060076] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.060083] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x802800) on tqpair=0x7975a0 00:25:45.657 [2024-04-15 02:00:31.060092] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:45.657 [2024-04-15 02:00:31.060100] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:45.657 [2024-04-15 02:00:31.060116] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.060125] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.060132] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7975a0) 00:25:45.657 [2024-04-15 02:00:31.060143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.657 [2024-04-15 02:00:31.060163] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x802800, cid 3, qid 0 00:25:45.657 [2024-04-15 02:00:31.060366] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.657 [2024-04-15 02:00:31.060378] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.657 [2024-04-15 02:00:31.060385] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.060392] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x802800) on tqpair=0x7975a0 00:25:45.657 [2024-04-15 02:00:31.060408] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.060418] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.060425] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7975a0) 00:25:45.657 [2024-04-15 02:00:31.060436] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.657 [2024-04-15 02:00:31.060455] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x802800, cid 3, qid 0 00:25:45.657 [2024-04-15 02:00:31.060657] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.657 [2024-04-15 02:00:31.060669] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.657 [2024-04-15 02:00:31.060676] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.060683] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x802800) on tqpair=0x7975a0 00:25:45.657 [2024-04-15 02:00:31.060699] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.060708] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.060715] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7975a0) 00:25:45.657 [2024-04-15 02:00:31.060725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.657 [2024-04-15 02:00:31.060744] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x802800, cid 3, qid 0 00:25:45.657 [2024-04-15 02:00:31.060932] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.657 [2024-04-15 
02:00:31.060948] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.657 [2024-04-15 02:00:31.060955] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.060966] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x802800) on tqpair=0x7975a0 00:25:45.657 [2024-04-15 02:00:31.060984] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.060994] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.061000] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7975a0) 00:25:45.657 [2024-04-15 02:00:31.061011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.657 [2024-04-15 02:00:31.061032] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x802800, cid 3, qid 0 00:25:45.657 [2024-04-15 02:00:31.065057] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.657 [2024-04-15 02:00:31.065073] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.657 [2024-04-15 02:00:31.065081] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.065088] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x802800) on tqpair=0x7975a0 00:25:45.657 [2024-04-15 02:00:31.065120] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.065130] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.065137] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7975a0) 00:25:45.657 [2024-04-15 02:00:31.065148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.657 [2024-04-15 02:00:31.065171] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x802800, cid 3, qid 0 00:25:45.657 [2024-04-15 02:00:31.065376] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.657 [2024-04-15 02:00:31.065388] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.657 [2024-04-15 02:00:31.065395] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.065402] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x802800) on tqpair=0x7975a0 00:25:45.657 [2024-04-15 02:00:31.065415] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:25:45.657 00:25:45.657 02:00:31 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:45.657 [2024-04-15 02:00:31.098302] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
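The identify run against nqn.2016-06.io.spdk:cnode1 that starts here boils down to a handful of SPDK host-API calls. A minimal sketch (error handling trimmed; the app name "identify_sketch" is just a placeholder) of parsing the -r transport string, connecting, and printing a little of the Identify Controller data:

/* Minimal sketch of the spdk_nvme_identify invocation above, against
 * SPDK's public host API. Not the tool's actual source; a reduced
 * illustration of the same flow. */
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
    struct spdk_env_opts env_opts;
    struct spdk_nvme_transport_id trid = { 0 };
    struct spdk_nvme_ctrlr *ctrlr;
    const struct spdk_nvme_ctrlr_data *cdata;

    spdk_env_opts_init(&env_opts);
    env_opts.name = "identify_sketch";   /* placeholder app name */
    if (spdk_env_init(&env_opts) != 0) {
        return 1;
    }

    /* Same transport ID string the test passed via -r. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return 1;
    }

    /* Drives the FABRIC CONNECT / read vs / read cap / enable sequence
     * visible in the debug records that follow. */
    ctrlr = spdk_nvme_connect(&trid, NULL, 0);
    if (ctrlr == NULL) {
        return 1;
    }

    cdata = spdk_nvme_ctrlr_get_data(ctrlr);
    printf("Serial Number: %.*s\n", (int)sizeof(cdata->sn), (const char *)cdata->sn);
    printf("Model Number:  %.*s\n", (int)sizeof(cdata->mn), (const char *)cdata->mn);

    spdk_nvme_detach(ctrlr);
    return 0;
}
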
00:25:45.657 [2024-04-15 02:00:31.098355] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2244194 ] 00:25:45.657 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.657 [2024-04-15 02:00:31.133842] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:45.657 [2024-04-15 02:00:31.133890] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:45.657 [2024-04-15 02:00:31.133899] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:45.657 [2024-04-15 02:00:31.133913] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:45.657 [2024-04-15 02:00:31.133925] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:45.657 [2024-04-15 02:00:31.134226] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:45.657 [2024-04-15 02:00:31.134266] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x19215a0 0 00:25:45.657 [2024-04-15 02:00:31.148069] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:45.657 [2024-04-15 02:00:31.148104] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:45.657 [2024-04-15 02:00:31.148113] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:45.657 [2024-04-15 02:00:31.148119] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:45.657 [2024-04-15 02:00:31.148158] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.148170] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.148177] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19215a0) 00:25:45.657 [2024-04-15 02:00:31.148191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:45.657 [2024-04-15 02:00:31.148218] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c3e0, cid 0, qid 0 00:25:45.657 [2024-04-15 02:00:31.155074] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.657 [2024-04-15 02:00:31.155103] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.657 [2024-04-15 02:00:31.155111] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.155118] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c3e0) on tqpair=0x19215a0 00:25:45.657 [2024-04-15 02:00:31.155152] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:45.657 [2024-04-15 02:00:31.155165] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:45.657 [2024-04-15 02:00:31.155174] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:45.657 [2024-04-15 02:00:31.155194] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.657 [2024-04-15 02:00:31.155202] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.657 [2024-04-15 
02:00:31.155209] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19215a0) 00:25:45.658 [2024-04-15 02:00:31.155221] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.658 [2024-04-15 02:00:31.155245] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c3e0, cid 0, qid 0 00:25:45.658 [2024-04-15 02:00:31.155473] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.658 [2024-04-15 02:00:31.155486] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.658 [2024-04-15 02:00:31.155494] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.155501] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c3e0) on tqpair=0x19215a0 00:25:45.658 [2024-04-15 02:00:31.155514] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:45.658 [2024-04-15 02:00:31.155529] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:45.658 [2024-04-15 02:00:31.155541] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.155549] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.155556] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19215a0) 00:25:45.658 [2024-04-15 02:00:31.155567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.658 [2024-04-15 02:00:31.155588] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c3e0, cid 0, qid 0 00:25:45.658 [2024-04-15 02:00:31.155805] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.658 [2024-04-15 02:00:31.155818] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.658 [2024-04-15 02:00:31.155825] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.155832] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c3e0) on tqpair=0x19215a0 00:25:45.658 [2024-04-15 02:00:31.155842] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:45.658 [2024-04-15 02:00:31.155860] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:45.658 [2024-04-15 02:00:31.155874] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.155881] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.155888] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19215a0) 00:25:45.658 [2024-04-15 02:00:31.155898] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.658 [2024-04-15 02:00:31.155919] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c3e0, cid 0, qid 0 00:25:45.658 [2024-04-15 02:00:31.156134] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.658 [2024-04-15 02:00:31.156148] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
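The FABRIC PROPERTY GET/SET records above and below are the standard NVMe controller-enable handshake carried over Fabrics: read VS and CAP, check CC.EN, disable and wait for CSTS.RDY = 0 if needed, then set CC.EN = 1 and wait for CSTS.RDY = 1. A sketch of that handshake, with reg_read32()/reg_write32() as hypothetical transport helpers (over TCP they map to the Fabrics Property Get/Set commands seen in the log):

/* Sketch of the CC.EN / CSTS.RDY enable handshake the state machine in
 * this log walks through. reg_read32()/reg_write32() are hypothetical. */
#include <stdint.h>

#define NVME_REG_CC   0x14u  /* Controller Configuration */
#define NVME_REG_CSTS 0x1cu  /* Controller Status */
#define NVME_CC_EN    (1u << 0)
#define NVME_CSTS_RDY (1u << 0)

extern uint32_t reg_read32(uint32_t offset);              /* hypothetical */
extern void     reg_write32(uint32_t offset, uint32_t v); /* hypothetical */

static void nvme_enable_controller(void)
{
    uint32_t cc = reg_read32(NVME_REG_CC);

    if (cc & NVME_CC_EN) {
        /* Mirror the log: first disable and wait for CSTS.RDY = 0 ... */
        reg_write32(NVME_REG_CC, cc & ~NVME_CC_EN);
        while (reg_read32(NVME_REG_CSTS) & NVME_CSTS_RDY) { /* poll */ }
    }
    /* ... then set CC.EN = 1 and wait for CSTS.RDY = 1. */
    reg_write32(NVME_REG_CC, cc | NVME_CC_EN);
    while (!(reg_read32(NVME_REG_CSTS) & NVME_CSTS_RDY)) { /* poll */ }
}

A real implementation bounds both polls using the CAP.TO readiness timeout, which is where the per-state 15000 ms timeouts in these records come from.
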
00:25:45.658 [2024-04-15 02:00:31.156156] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.156162] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c3e0) on tqpair=0x19215a0 00:25:45.658 [2024-04-15 02:00:31.156172] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:45.658 [2024-04-15 02:00:31.156189] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.156198] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.156205] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19215a0) 00:25:45.658 [2024-04-15 02:00:31.156216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.658 [2024-04-15 02:00:31.156237] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c3e0, cid 0, qid 0 00:25:45.658 [2024-04-15 02:00:31.156453] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.658 [2024-04-15 02:00:31.156466] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.658 [2024-04-15 02:00:31.156474] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.156480] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c3e0) on tqpair=0x19215a0 00:25:45.658 [2024-04-15 02:00:31.156489] nvme_ctrlr.c:3736:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:45.658 [2024-04-15 02:00:31.156498] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:45.658 [2024-04-15 02:00:31.156511] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:45.658 [2024-04-15 02:00:31.156620] nvme_ctrlr.c:3929:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:45.658 [2024-04-15 02:00:31.156628] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:45.658 [2024-04-15 02:00:31.156655] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.156663] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.156669] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19215a0) 00:25:45.658 [2024-04-15 02:00:31.156679] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.658 [2024-04-15 02:00:31.156700] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c3e0, cid 0, qid 0 00:25:45.658 [2024-04-15 02:00:31.156927] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.658 [2024-04-15 02:00:31.156943] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.658 [2024-04-15 02:00:31.156951] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.156961] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c3e0) on 
tqpair=0x19215a0 00:25:45.658 [2024-04-15 02:00:31.156971] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:45.658 [2024-04-15 02:00:31.156989] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.156998] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.157005] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19215a0) 00:25:45.658 [2024-04-15 02:00:31.157015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.658 [2024-04-15 02:00:31.157037] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c3e0, cid 0, qid 0 00:25:45.658 [2024-04-15 02:00:31.157255] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.658 [2024-04-15 02:00:31.157270] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.658 [2024-04-15 02:00:31.157278] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.157285] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c3e0) on tqpair=0x19215a0 00:25:45.658 [2024-04-15 02:00:31.157293] nvme_ctrlr.c:3771:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:45.658 [2024-04-15 02:00:31.157302] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:45.658 [2024-04-15 02:00:31.157316] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:45.658 [2024-04-15 02:00:31.157330] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:45.658 [2024-04-15 02:00:31.157343] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.157351] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.157357] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19215a0) 00:25:45.658 [2024-04-15 02:00:31.157368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.658 [2024-04-15 02:00:31.157390] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c3e0, cid 0, qid 0 00:25:45.658 [2024-04-15 02:00:31.157655] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.658 [2024-04-15 02:00:31.157670] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.658 [2024-04-15 02:00:31.157677] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.157684] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19215a0): datao=0, datal=4096, cccid=0 00:25:45.658 [2024-04-15 02:00:31.157692] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x198c3e0) on tqpair(0x19215a0): expected_datao=0, payload_size=4096 00:25:45.658 [2024-04-15 02:00:31.157768] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.157778] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.198250] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.658 [2024-04-15 02:00:31.198270] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.658 [2024-04-15 02:00:31.198278] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.198285] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c3e0) on tqpair=0x19215a0 00:25:45.658 [2024-04-15 02:00:31.198297] nvme_ctrlr.c:1971:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:45.658 [2024-04-15 02:00:31.198323] nvme_ctrlr.c:1975:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:45.658 [2024-04-15 02:00:31.198335] nvme_ctrlr.c:1978:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:45.658 [2024-04-15 02:00:31.198342] nvme_ctrlr.c:2002:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:45.658 [2024-04-15 02:00:31.198350] nvme_ctrlr.c:2017:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:45.658 [2024-04-15 02:00:31.198358] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:45.658 [2024-04-15 02:00:31.198372] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:45.658 [2024-04-15 02:00:31.198385] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.198392] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.198399] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19215a0) 00:25:45.658 [2024-04-15 02:00:31.198411] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:45.658 [2024-04-15 02:00:31.198434] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c3e0, cid 0, qid 0 00:25:45.658 [2024-04-15 02:00:31.198634] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.658 [2024-04-15 02:00:31.198647] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.658 [2024-04-15 02:00:31.198654] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.198661] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c3e0) on tqpair=0x19215a0 00:25:45.658 [2024-04-15 02:00:31.198672] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.658 [2024-04-15 02:00:31.198680] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.198686] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x19215a0) 00:25:45.659 [2024-04-15 02:00:31.198696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.659 [2024-04-15 02:00:31.198707] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.198714] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.198720] 
nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x19215a0) 00:25:45.659 [2024-04-15 02:00:31.198729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.659 [2024-04-15 02:00:31.198738] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.198745] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.198752] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x19215a0) 00:25:45.659 [2024-04-15 02:00:31.198760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.659 [2024-04-15 02:00:31.198770] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.198777] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.198783] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19215a0) 00:25:45.659 [2024-04-15 02:00:31.198807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.659 [2024-04-15 02:00:31.198817] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:45.659 [2024-04-15 02:00:31.198835] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:45.659 [2024-04-15 02:00:31.198847] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.198858] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.198864] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19215a0) 00:25:45.659 [2024-04-15 02:00:31.198875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.659 [2024-04-15 02:00:31.198897] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c3e0, cid 0, qid 0 00:25:45.659 [2024-04-15 02:00:31.198922] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c540, cid 1, qid 0 00:25:45.659 [2024-04-15 02:00:31.198931] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c6a0, cid 2, qid 0 00:25:45.659 [2024-04-15 02:00:31.198939] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c800, cid 3, qid 0 00:25:45.659 [2024-04-15 02:00:31.198947] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c960, cid 4, qid 0 00:25:45.659 [2024-04-15 02:00:31.203077] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.659 [2024-04-15 02:00:31.203093] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.659 [2024-04-15 02:00:31.203101] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.203108] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c960) on tqpair=0x19215a0 00:25:45.659 [2024-04-15 02:00:31.203117] nvme_ctrlr.c:2889:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:45.659 
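Above, the host arms four Asynchronous Event Requests (matching the advertised Async Event Request Limit of 4) and reads the keep-alive timer, settling on a keep alive every 5000000 us. Both are serviced from the admin-queue poller; a minimal sketch, assuming an already-connected ctrlr:

/* Sketch: register for the AERs armed above and keep the admin queue
 * polled, which on fabrics also sends keep alives when they fall due. */
#include <stdio.h>
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
    if (!spdk_nvme_cpl_is_error(cpl)) {
        /* cdw0 carries the async event type/info per the NVMe spec. */
        printf("async event: cdw0=0x%x\n", cpl->cdw0);
    }
}

void
poll_admin_queue(struct spdk_nvme_ctrlr *ctrlr)
{
    spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
    for (;;) {
        /* Stands in for the application's admin poller loop:
         * completes AERs and drives the 5 s keep alive. */
        spdk_nvme_ctrlr_process_admin_completions(ctrlr);
    }
}
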
[2024-04-15 02:00:31.203126] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:45.659 [2024-04-15 02:00:31.203140] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:45.659 [2024-04-15 02:00:31.203165] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:45.659 [2024-04-15 02:00:31.203176] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.203183] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.203190] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19215a0) 00:25:45.659 [2024-04-15 02:00:31.203201] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:45.659 [2024-04-15 02:00:31.203223] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c960, cid 4, qid 0 00:25:45.659 [2024-04-15 02:00:31.203457] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.659 [2024-04-15 02:00:31.203473] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.659 [2024-04-15 02:00:31.203480] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.203487] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c960) on tqpair=0x19215a0 00:25:45.659 [2024-04-15 02:00:31.203541] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:45.659 [2024-04-15 02:00:31.203559] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:45.659 [2024-04-15 02:00:31.203573] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.203580] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.203587] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19215a0) 00:25:45.659 [2024-04-15 02:00:31.203597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.659 [2024-04-15 02:00:31.203619] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c960, cid 4, qid 0 00:25:45.659 [2024-04-15 02:00:31.203935] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.659 [2024-04-15 02:00:31.203957] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.659 [2024-04-15 02:00:31.203965] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.203972] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19215a0): datao=0, datal=4096, cccid=4 00:25:45.659 [2024-04-15 02:00:31.203980] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x198c960) on tqpair(0x19215a0): expected_datao=0, payload_size=4096 00:25:45.659 [2024-04-15 02:00:31.203991] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.203999] 
nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.204085] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.659 [2024-04-15 02:00:31.204099] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.659 [2024-04-15 02:00:31.204106] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.204113] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c960) on tqpair=0x19215a0 00:25:45.659 [2024-04-15 02:00:31.204129] nvme_ctrlr.c:4542:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:45.659 [2024-04-15 02:00:31.204149] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:45.659 [2024-04-15 02:00:31.204167] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:45.659 [2024-04-15 02:00:31.204180] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.204188] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.204195] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19215a0) 00:25:45.659 [2024-04-15 02:00:31.204205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.659 [2024-04-15 02:00:31.204227] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c960, cid 4, qid 0 00:25:45.659 [2024-04-15 02:00:31.204484] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.659 [2024-04-15 02:00:31.204500] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.659 [2024-04-15 02:00:31.204507] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.204513] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19215a0): datao=0, datal=4096, cccid=4 00:25:45.659 [2024-04-15 02:00:31.204521] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x198c960) on tqpair(0x19215a0): expected_datao=0, payload_size=4096 00:25:45.659 [2024-04-15 02:00:31.204532] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.204540] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.204631] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.659 [2024-04-15 02:00:31.204643] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.659 [2024-04-15 02:00:31.204650] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.204656] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c960) on tqpair=0x19215a0 00:25:45.659 [2024-04-15 02:00:31.204678] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:45.659 [2024-04-15 02:00:31.204696] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:45.659 [2024-04-15 02:00:31.204709] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.659 [2024-04-15 
02:00:31.204717] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.204724] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19215a0) 00:25:45.659 [2024-04-15 02:00:31.204738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.659 [2024-04-15 02:00:31.204760] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c960, cid 4, qid 0 00:25:45.659 [2024-04-15 02:00:31.204998] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.659 [2024-04-15 02:00:31.205014] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.659 [2024-04-15 02:00:31.205021] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.205028] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19215a0): datao=0, datal=4096, cccid=4 00:25:45.659 [2024-04-15 02:00:31.205036] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x198c960) on tqpair(0x19215a0): expected_datao=0, payload_size=4096 00:25:45.659 [2024-04-15 02:00:31.205055] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.205065] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.205144] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.659 [2024-04-15 02:00:31.205156] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.659 [2024-04-15 02:00:31.205163] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.659 [2024-04-15 02:00:31.205169] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c960) on tqpair=0x19215a0 00:25:45.659 [2024-04-15 02:00:31.205184] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:45.659 [2024-04-15 02:00:31.205199] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:45.660 [2024-04-15 02:00:31.205214] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:45.660 [2024-04-15 02:00:31.205225] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:45.660 [2024-04-15 02:00:31.205234] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:45.660 [2024-04-15 02:00:31.205243] nvme_ctrlr.c:2977:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:25:45.660 [2024-04-15 02:00:31.205250] nvme_ctrlr.c:1471:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:45.660 [2024-04-15 02:00:31.205259] nvme_ctrlr.c:1477:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:45.660 [2024-04-15 02:00:31.205278] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.205287] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.205294] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19215a0) 00:25:45.660 [2024-04-15 02:00:31.205304] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.660 [2024-04-15 02:00:31.205316] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.205338] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.205345] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19215a0) 00:25:45.660 [2024-04-15 02:00:31.205354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:45.660 [2024-04-15 02:00:31.205379] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c960, cid 4, qid 0 00:25:45.660 [2024-04-15 02:00:31.205406] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198cac0, cid 5, qid 0 00:25:45.660 [2024-04-15 02:00:31.205632] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.660 [2024-04-15 02:00:31.205648] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.660 [2024-04-15 02:00:31.205658] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.205666] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c960) on tqpair=0x19215a0 00:25:45.660 [2024-04-15 02:00:31.205679] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.660 [2024-04-15 02:00:31.205688] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.660 [2024-04-15 02:00:31.205695] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.205702] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198cac0) on tqpair=0x19215a0 00:25:45.660 [2024-04-15 02:00:31.205718] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.205728] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.205734] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19215a0) 00:25:45.660 [2024-04-15 02:00:31.205745] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.660 [2024-04-15 02:00:31.205766] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198cac0, cid 5, qid 0 00:25:45.660 [2024-04-15 02:00:31.206004] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.660 [2024-04-15 02:00:31.206016] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.660 [2024-04-15 02:00:31.206023] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.206030] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198cac0) on tqpair=0x19215a0 00:25:45.660 [2024-04-15 02:00:31.206056] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.206067] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.206073] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19215a0) 00:25:45.660 [2024-04-15 02:00:31.206083] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.660 [2024-04-15 02:00:31.206104] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198cac0, cid 5, qid 0 00:25:45.660 [2024-04-15 02:00:31.206336] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.660 [2024-04-15 02:00:31.206352] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.660 [2024-04-15 02:00:31.206359] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.206365] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198cac0) on tqpair=0x19215a0 00:25:45.660 [2024-04-15 02:00:31.206383] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.206392] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.206399] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19215a0) 00:25:45.660 [2024-04-15 02:00:31.206409] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.660 [2024-04-15 02:00:31.206435] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198cac0, cid 5, qid 0 00:25:45.660 [2024-04-15 02:00:31.206674] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.660 [2024-04-15 02:00:31.206690] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.660 [2024-04-15 02:00:31.206697] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.206704] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198cac0) on tqpair=0x19215a0 00:25:45.660 [2024-04-15 02:00:31.206725] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.206735] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.206741] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x19215a0) 00:25:45.660 [2024-04-15 02:00:31.206755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.660 [2024-04-15 02:00:31.206768] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.206776] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.206783] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x19215a0) 00:25:45.660 [2024-04-15 02:00:31.206792] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.660 [2024-04-15 02:00:31.206804] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.206812] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.206818] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x19215a0) 00:25:45.660 [2024-04-15 02:00:31.206843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:45.660 [2024-04-15 02:00:31.206855] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.206863] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.206869] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19215a0) 00:25:45.660 [2024-04-15 02:00:31.206878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.660 [2024-04-15 02:00:31.206900] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198cac0, cid 5, qid 0 00:25:45.660 [2024-04-15 02:00:31.206926] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c960, cid 4, qid 0 00:25:45.660 [2024-04-15 02:00:31.206934] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198cc20, cid 6, qid 0 00:25:45.660 [2024-04-15 02:00:31.206942] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198cd80, cid 7, qid 0 00:25:45.660 [2024-04-15 02:00:31.211065] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.660 [2024-04-15 02:00:31.211082] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.660 [2024-04-15 02:00:31.211089] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.211095] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19215a0): datao=0, datal=8192, cccid=5 00:25:45.660 [2024-04-15 02:00:31.211103] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x198cac0) on tqpair(0x19215a0): expected_datao=0, payload_size=8192 00:25:45.660 [2024-04-15 02:00:31.211114] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.211122] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.211130] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.660 [2024-04-15 02:00:31.211139] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.660 [2024-04-15 02:00:31.211146] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.660 [2024-04-15 02:00:31.211152] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19215a0): datao=0, datal=512, cccid=4 00:25:45.661 [2024-04-15 02:00:31.211159] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x198c960) on tqpair(0x19215a0): expected_datao=0, payload_size=512 00:25:45.661 [2024-04-15 02:00:31.211169] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.661 [2024-04-15 02:00:31.211176] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.661 [2024-04-15 02:00:31.211184] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.661 [2024-04-15 02:00:31.211193] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.661 [2024-04-15 02:00:31.211200] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.661 [2024-04-15 02:00:31.211206] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19215a0): datao=0, datal=512, cccid=6 00:25:45.661 [2024-04-15 02:00:31.211217] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x198cc20) on tqpair(0x19215a0): expected_datao=0, payload_size=512 00:25:45.661 [2024-04-15 02:00:31.211227] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.661 [2024-04-15 02:00:31.211234] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.661 [2024-04-15 02:00:31.211242] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:45.661 [2024-04-15 02:00:31.211251] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:45.661 [2024-04-15 02:00:31.211258] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:45.661 [2024-04-15 02:00:31.211264] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x19215a0): datao=0, datal=4096, cccid=7 00:25:45.661 [2024-04-15 02:00:31.211271] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x198cd80) on tqpair(0x19215a0): expected_datao=0, payload_size=4096 00:25:45.661 [2024-04-15 02:00:31.211281] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:45.661 [2024-04-15 02:00:31.211288] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:45.661 [2024-04-15 02:00:31.211296] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.661 [2024-04-15 02:00:31.211305] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.661 [2024-04-15 02:00:31.211311] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.661 [2024-04-15 02:00:31.211318] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198cac0) on tqpair=0x19215a0 00:25:45.661 [2024-04-15 02:00:31.211353] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.661 [2024-04-15 02:00:31.211364] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.661 [2024-04-15 02:00:31.211370] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.661 [2024-04-15 02:00:31.211377] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c960) on tqpair=0x19215a0 00:25:45.661 [2024-04-15 02:00:31.211391] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.661 [2024-04-15 02:00:31.211401] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.661 [2024-04-15 02:00:31.211407] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.661 [2024-04-15 02:00:31.211413] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198cc20) on tqpair=0x19215a0 00:25:45.661 [2024-04-15 02:00:31.211424] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.661 [2024-04-15 02:00:31.211434] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.661 [2024-04-15 02:00:31.211440] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.661 [2024-04-15 02:00:31.211446] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198cd80) on tqpair=0x19215a0 00:25:45.661 ===================================================== 00:25:45.661 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:45.661 ===================================================== 00:25:45.661 Controller Capabilities/Features 00:25:45.661 ================================ 00:25:45.661 Vendor ID: 8086 00:25:45.661 Subsystem Vendor ID: 8086 00:25:45.661 Serial Number: SPDK00000000000001 00:25:45.661 Model Number: SPDK bdev Controller 00:25:45.661 Firmware Version: 24.01.1 00:25:45.661 Recommended Arb Burst: 6 00:25:45.661 IEEE OUI Identifier: e4 d2 5c 00:25:45.661 Multi-path I/O 00:25:45.661 May have multiple subsystem 
ports: Yes 00:25:45.661 May have multiple controllers: Yes 00:25:45.661 Associated with SR-IOV VF: No 00:25:45.661 Max Data Transfer Size: 131072 00:25:45.661 Max Number of Namespaces: 32 00:25:45.661 Max Number of I/O Queues: 127 00:25:45.661 NVMe Specification Version (VS): 1.3 00:25:45.661 NVMe Specification Version (Identify): 1.3 00:25:45.661 Maximum Queue Entries: 128 00:25:45.661 Contiguous Queues Required: Yes 00:25:45.661 Arbitration Mechanisms Supported 00:25:45.661 Weighted Round Robin: Not Supported 00:25:45.661 Vendor Specific: Not Supported 00:25:45.661 Reset Timeout: 15000 ms 00:25:45.661 Doorbell Stride: 4 bytes 00:25:45.661 NVM Subsystem Reset: Not Supported 00:25:45.661 Command Sets Supported 00:25:45.661 NVM Command Set: Supported 00:25:45.661 Boot Partition: Not Supported 00:25:45.661 Memory Page Size Minimum: 4096 bytes 00:25:45.661 Memory Page Size Maximum: 4096 bytes 00:25:45.661 Persistent Memory Region: Not Supported 00:25:45.661 Optional Asynchronous Events Supported 00:25:45.661 Namespace Attribute Notices: Supported 00:25:45.661 Firmware Activation Notices: Not Supported 00:25:45.661 ANA Change Notices: Not Supported 00:25:45.661 PLE Aggregate Log Change Notices: Not Supported 00:25:45.661 LBA Status Info Alert Notices: Not Supported 00:25:45.661 EGE Aggregate Log Change Notices: Not Supported 00:25:45.661 Normal NVM Subsystem Shutdown event: Not Supported 00:25:45.661 Zone Descriptor Change Notices: Not Supported 00:25:45.661 Discovery Log Change Notices: Not Supported 00:25:45.661 Controller Attributes 00:25:45.661 128-bit Host Identifier: Supported 00:25:45.661 Non-Operational Permissive Mode: Not Supported 00:25:45.661 NVM Sets: Not Supported 00:25:45.661 Read Recovery Levels: Not Supported 00:25:45.661 Endurance Groups: Not Supported 00:25:45.661 Predictable Latency Mode: Not Supported 00:25:45.661 Traffic Based Keep ALive: Not Supported 00:25:45.661 Namespace Granularity: Not Supported 00:25:45.661 SQ Associations: Not Supported 00:25:45.661 UUID List: Not Supported 00:25:45.661 Multi-Domain Subsystem: Not Supported 00:25:45.661 Fixed Capacity Management: Not Supported 00:25:45.661 Variable Capacity Management: Not Supported 00:25:45.661 Delete Endurance Group: Not Supported 00:25:45.661 Delete NVM Set: Not Supported 00:25:45.661 Extended LBA Formats Supported: Not Supported 00:25:45.661 Flexible Data Placement Supported: Not Supported 00:25:45.661 00:25:45.661 Controller Memory Buffer Support 00:25:45.661 ================================ 00:25:45.661 Supported: No 00:25:45.661 00:25:45.661 Persistent Memory Region Support 00:25:45.661 ================================ 00:25:45.661 Supported: No 00:25:45.661 00:25:45.661 Admin Command Set Attributes 00:25:45.661 ============================ 00:25:45.661 Security Send/Receive: Not Supported 00:25:45.661 Format NVM: Not Supported 00:25:45.661 Firmware Activate/Download: Not Supported 00:25:45.661 Namespace Management: Not Supported 00:25:45.661 Device Self-Test: Not Supported 00:25:45.661 Directives: Not Supported 00:25:45.661 NVMe-MI: Not Supported 00:25:45.661 Virtualization Management: Not Supported 00:25:45.661 Doorbell Buffer Config: Not Supported 00:25:45.661 Get LBA Status Capability: Not Supported 00:25:45.661 Command & Feature Lockdown Capability: Not Supported 00:25:45.661 Abort Command Limit: 4 00:25:45.661 Async Event Request Limit: 4 00:25:45.661 Number of Firmware Slots: N/A 00:25:45.661 Firmware Slot 1 Read-Only: N/A 00:25:45.661 Firmware Activation Without Reset: N/A 00:25:45.661 Multiple 
Update Detection Support: N/A 00:25:45.661 Firmware Update Granularity: No Information Provided 00:25:45.661 Per-Namespace SMART Log: No 00:25:45.661 Asymmetric Namespace Access Log Page: Not Supported 00:25:45.661 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:45.661 Command Effects Log Page: Supported 00:25:45.661 Get Log Page Extended Data: Supported 00:25:45.661 Telemetry Log Pages: Not Supported 00:25:45.661 Persistent Event Log Pages: Not Supported 00:25:45.661 Supported Log Pages Log Page: May Support 00:25:45.661 Commands Supported & Effects Log Page: Not Supported 00:25:45.661 Feature Identifiers & Effects Log Page:May Support 00:25:45.661 NVMe-MI Commands & Effects Log Page: May Support 00:25:45.661 Data Area 4 for Telemetry Log: Not Supported 00:25:45.661 Error Log Page Entries Supported: 128 00:25:45.661 Keep Alive: Supported 00:25:45.661 Keep Alive Granularity: 10000 ms 00:25:45.661 00:25:45.661 NVM Command Set Attributes 00:25:45.661 ========================== 00:25:45.661 Submission Queue Entry Size 00:25:45.661 Max: 64 00:25:45.661 Min: 64 00:25:45.661 Completion Queue Entry Size 00:25:45.661 Max: 16 00:25:45.661 Min: 16 00:25:45.661 Number of Namespaces: 32 00:25:45.661 Compare Command: Supported 00:25:45.661 Write Uncorrectable Command: Not Supported 00:25:45.661 Dataset Management Command: Supported 00:25:45.661 Write Zeroes Command: Supported 00:25:45.661 Set Features Save Field: Not Supported 00:25:45.661 Reservations: Supported 00:25:45.661 Timestamp: Not Supported 00:25:45.661 Copy: Supported 00:25:45.661 Volatile Write Cache: Present 00:25:45.661 Atomic Write Unit (Normal): 1 00:25:45.661 Atomic Write Unit (PFail): 1 00:25:45.661 Atomic Compare & Write Unit: 1 00:25:45.661 Fused Compare & Write: Supported 00:25:45.661 Scatter-Gather List 00:25:45.661 SGL Command Set: Supported 00:25:45.661 SGL Keyed: Supported 00:25:45.661 SGL Bit Bucket Descriptor: Not Supported 00:25:45.661 SGL Metadata Pointer: Not Supported 00:25:45.661 Oversized SGL: Not Supported 00:25:45.661 SGL Metadata Address: Not Supported 00:25:45.661 SGL Offset: Supported 00:25:45.662 Transport SGL Data Block: Not Supported 00:25:45.662 Replay Protected Memory Block: Not Supported 00:25:45.662 00:25:45.662 Firmware Slot Information 00:25:45.662 ========================= 00:25:45.662 Active slot: 1 00:25:45.662 Slot 1 Firmware Revision: 24.01.1 00:25:45.662 00:25:45.662 00:25:45.662 Commands Supported and Effects 00:25:45.662 ============================== 00:25:45.662 Admin Commands 00:25:45.662 -------------- 00:25:45.662 Get Log Page (02h): Supported 00:25:45.662 Identify (06h): Supported 00:25:45.662 Abort (08h): Supported 00:25:45.662 Set Features (09h): Supported 00:25:45.662 Get Features (0Ah): Supported 00:25:45.662 Asynchronous Event Request (0Ch): Supported 00:25:45.662 Keep Alive (18h): Supported 00:25:45.662 I/O Commands 00:25:45.662 ------------ 00:25:45.662 Flush (00h): Supported LBA-Change 00:25:45.662 Write (01h): Supported LBA-Change 00:25:45.662 Read (02h): Supported 00:25:45.662 Compare (05h): Supported 00:25:45.662 Write Zeroes (08h): Supported LBA-Change 00:25:45.662 Dataset Management (09h): Supported LBA-Change 00:25:45.662 Copy (19h): Supported LBA-Change 00:25:45.662 Unknown (79h): Supported LBA-Change 00:25:45.662 Unknown (7Ah): Supported 00:25:45.662 00:25:45.662 Error Log 00:25:45.662 ========= 00:25:45.662 00:25:45.662 Arbitration 00:25:45.662 =========== 00:25:45.662 Arbitration Burst: 1 00:25:45.662 00:25:45.662 Power Management 00:25:45.662 ================ 00:25:45.662 
Number of Power States: 1 00:25:45.662 Current Power State: Power State #0 00:25:45.662 Power State #0: 00:25:45.662 Max Power: 0.00 W 00:25:45.662 Non-Operational State: Operational 00:25:45.662 Entry Latency: Not Reported 00:25:45.662 Exit Latency: Not Reported 00:25:45.662 Relative Read Throughput: 0 00:25:45.662 Relative Read Latency: 0 00:25:45.662 Relative Write Throughput: 0 00:25:45.662 Relative Write Latency: 0 00:25:45.662 Idle Power: Not Reported 00:25:45.662 Active Power: Not Reported 00:25:45.662 Non-Operational Permissive Mode: Not Supported 00:25:45.662 00:25:45.662 Health Information 00:25:45.662 ================== 00:25:45.662 Critical Warnings: 00:25:45.662 Available Spare Space: OK 00:25:45.662 Temperature: OK 00:25:45.662 Device Reliability: OK 00:25:45.662 Read Only: No 00:25:45.662 Volatile Memory Backup: OK 00:25:45.662 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:45.662 Temperature Threshold: [2024-04-15 02:00:31.211581] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.211593] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.211600] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x19215a0) 00:25:45.662 [2024-04-15 02:00:31.211611] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.662 [2024-04-15 02:00:31.211634] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198cd80, cid 7, qid 0 00:25:45.662 [2024-04-15 02:00:31.211892] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.662 [2024-04-15 02:00:31.211905] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.662 [2024-04-15 02:00:31.211913] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.211920] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198cd80) on tqpair=0x19215a0 00:25:45.662 [2024-04-15 02:00:31.211965] nvme_ctrlr.c:4206:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:45.662 [2024-04-15 02:00:31.211986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.662 [2024-04-15 02:00:31.212002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.662 [2024-04-15 02:00:31.212013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.662 [2024-04-15 02:00:31.212022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:45.662 [2024-04-15 02:00:31.212035] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.212044] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.212082] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19215a0) 00:25:45.662 [2024-04-15 02:00:31.212093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.662 [2024-04-15 02:00:31.212131] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c800, cid 3, qid 0 
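The controller report interleaved above is the output of SPDK's identify example pointed at the TCP target. A minimal sketch of that step, assuming the installed binary name spdk_nvme_identify and the address and subsystem used throughout this run:

  spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The FABRIC PROPERTY GET/SET debug entries that continue below are the property accesses the host driver issues while shutting the controller down: it sets CC.SHN, then polls CSTS until the shutdown completes.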
00:25:45.662 [2024-04-15 02:00:31.212374] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.662 [2024-04-15 02:00:31.212390] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.662 [2024-04-15 02:00:31.212397] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.212404] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c800) on tqpair=0x19215a0 00:25:45.662 [2024-04-15 02:00:31.212416] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.212425] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.212431] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19215a0) 00:25:45.662 [2024-04-15 02:00:31.212442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.662 [2024-04-15 02:00:31.212468] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c800, cid 3, qid 0 00:25:45.662 [2024-04-15 02:00:31.212727] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.662 [2024-04-15 02:00:31.212740] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.662 [2024-04-15 02:00:31.212747] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.212754] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c800) on tqpair=0x19215a0 00:25:45.662 [2024-04-15 02:00:31.212763] nvme_ctrlr.c:1069:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:45.662 [2024-04-15 02:00:31.212771] nvme_ctrlr.c:1072:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:45.662 [2024-04-15 02:00:31.212786] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.212796] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.212802] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19215a0) 00:25:45.662 [2024-04-15 02:00:31.212813] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.662 [2024-04-15 02:00:31.212833] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c800, cid 3, qid 0 00:25:45.662 [2024-04-15 02:00:31.213056] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.662 [2024-04-15 02:00:31.213072] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.662 [2024-04-15 02:00:31.213080] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.213086] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c800) on tqpair=0x19215a0 00:25:45.662 [2024-04-15 02:00:31.213105] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.213114] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.213121] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19215a0) 00:25:45.662 [2024-04-15 02:00:31.213135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.662 [2024-04-15 
02:00:31.213157] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c800, cid 3, qid 0 00:25:45.662 [2024-04-15 02:00:31.213391] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.662 [2024-04-15 02:00:31.213404] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.662 [2024-04-15 02:00:31.213411] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.213417] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c800) on tqpair=0x19215a0 00:25:45.662 [2024-04-15 02:00:31.213434] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.213444] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.213450] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19215a0) 00:25:45.662 [2024-04-15 02:00:31.213461] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.662 [2024-04-15 02:00:31.213481] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c800, cid 3, qid 0 00:25:45.662 [2024-04-15 02:00:31.213716] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.662 [2024-04-15 02:00:31.213729] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.662 [2024-04-15 02:00:31.213736] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.213743] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c800) on tqpair=0x19215a0 00:25:45.662 [2024-04-15 02:00:31.213760] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.213769] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.213776] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19215a0) 00:25:45.662 [2024-04-15 02:00:31.213786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.662 [2024-04-15 02:00:31.213806] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c800, cid 3, qid 0 00:25:45.662 [2024-04-15 02:00:31.214040] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.662 [2024-04-15 02:00:31.214065] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.662 [2024-04-15 02:00:31.214073] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.214093] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c800) on tqpair=0x19215a0 00:25:45.662 [2024-04-15 02:00:31.214112] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.214121] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.662 [2024-04-15 02:00:31.214128] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19215a0) 00:25:45.662 [2024-04-15 02:00:31.214138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.663 [2024-04-15 02:00:31.214159] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c800, cid 3, qid 0 00:25:45.663 [2024-04-15 02:00:31.214376] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:25:45.663 [2024-04-15 02:00:31.214391] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.663 [2024-04-15 02:00:31.214398] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.663 [2024-04-15 02:00:31.214405] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c800) on tqpair=0x19215a0 00:25:45.663 [2024-04-15 02:00:31.214423] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.663 [2024-04-15 02:00:31.214432] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.663 [2024-04-15 02:00:31.214439] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19215a0) 00:25:45.663 [2024-04-15 02:00:31.214449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.663 [2024-04-15 02:00:31.214474] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c800, cid 3, qid 0 00:25:45.663 [2024-04-15 02:00:31.218058] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.663 [2024-04-15 02:00:31.218076] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.663 [2024-04-15 02:00:31.218084] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.663 [2024-04-15 02:00:31.218091] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c800) on tqpair=0x19215a0 00:25:45.663 [2024-04-15 02:00:31.218110] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:45.663 [2024-04-15 02:00:31.218120] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:45.663 [2024-04-15 02:00:31.218127] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x19215a0) 00:25:45.663 [2024-04-15 02:00:31.218138] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:45.663 [2024-04-15 02:00:31.218160] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x198c800, cid 3, qid 0 00:25:45.663 [2024-04-15 02:00:31.218403] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:45.663 [2024-04-15 02:00:31.218415] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:45.663 [2024-04-15 02:00:31.218423] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:45.663 [2024-04-15 02:00:31.218429] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x198c800) on tqpair=0x19215a0 00:25:45.663 [2024-04-15 02:00:31.218443] nvme_ctrlr.c:1191:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:25:45.663 0 Kelvin (-273 Celsius) 00:25:45.663 Available Spare: 0% 00:25:45.663 Available Spare Threshold: 0% 00:25:45.663 Life Percentage Used: 0% 00:25:45.663 Data Units Read: 0 00:25:45.663 Data Units Written: 0 00:25:45.663 Host Read Commands: 0 00:25:45.663 Host Write Commands: 0 00:25:45.663 Controller Busy Time: 0 minutes 00:25:45.663 Power Cycles: 0 00:25:45.663 Power On Hours: 0 hours 00:25:45.663 Unsafe Shutdowns: 0 00:25:45.663 Unrecoverable Media Errors: 0 00:25:45.663 Lifetime Error Log Entries: 0 00:25:45.663 Warning Temperature Time: 0 minutes 00:25:45.663 Critical Temperature Time: 0 minutes 00:25:45.663 00:25:45.663 Number of Queues 00:25:45.663 ================ 00:25:45.663 Number of I/O Submission Queues: 127 00:25:45.663 Number of 
I/O Completion Queues: 127 00:25:45.663 00:25:45.663 Active Namespaces 00:25:45.663 ================= 00:25:45.663 Namespace ID:1 00:25:45.663 Error Recovery Timeout: Unlimited 00:25:45.663 Command Set Identifier: NVM (00h) 00:25:45.663 Deallocate: Supported 00:25:45.663 Deallocated/Unwritten Error: Not Supported 00:25:45.663 Deallocated Read Value: Unknown 00:25:45.663 Deallocate in Write Zeroes: Not Supported 00:25:45.663 Deallocated Guard Field: 0xFFFF 00:25:45.663 Flush: Supported 00:25:45.663 Reservation: Supported 00:25:45.663 Namespace Sharing Capabilities: Multiple Controllers 00:25:45.663 Size (in LBAs): 131072 (0GiB) 00:25:45.663 Capacity (in LBAs): 131072 (0GiB) 00:25:45.663 Utilization (in LBAs): 131072 (0GiB) 00:25:45.663 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:45.663 EUI64: ABCDEF0123456789 00:25:45.663 UUID: 82a301cb-ba36-4967-9cd2-b41724eb3cdb 00:25:45.663 Thin Provisioning: Not Supported 00:25:45.663 Per-NS Atomic Units: Yes 00:25:45.663 Atomic Boundary Size (Normal): 0 00:25:45.663 Atomic Boundary Size (PFail): 0 00:25:45.663 Atomic Boundary Offset: 0 00:25:45.663 Maximum Single Source Range Length: 65535 00:25:45.663 Maximum Copy Length: 65535 00:25:45.663 Maximum Source Range Count: 1 00:25:45.663 NGUID/EUI64 Never Reused: No 00:25:45.663 Namespace Write Protected: No 00:25:45.663 Number of LBA Formats: 1 00:25:45.663 Current LBA Format: LBA Format #00 00:25:45.663 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:45.663 00:25:45.663 02:00:31 -- host/identify.sh@51 -- # sync 00:25:45.663 02:00:31 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:45.663 02:00:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:45.663 02:00:31 -- common/autotest_common.sh@10 -- # set +x 00:25:45.663 02:00:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:45.663 02:00:31 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:45.663 02:00:31 -- host/identify.sh@56 -- # nvmftestfini 00:25:45.663 02:00:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:45.663 02:00:31 -- nvmf/common.sh@116 -- # sync 00:25:45.663 02:00:31 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:45.663 02:00:31 -- nvmf/common.sh@119 -- # set +e 00:25:45.663 02:00:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:45.663 02:00:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:45.663 rmmod nvme_tcp 00:25:45.663 rmmod nvme_fabrics 00:25:45.663 rmmod nvme_keyring 00:25:45.663 02:00:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:45.663 02:00:31 -- nvmf/common.sh@123 -- # set -e 00:25:45.663 02:00:31 -- nvmf/common.sh@124 -- # return 0 00:25:45.663 02:00:31 -- nvmf/common.sh@477 -- # '[' -n 2244020 ']' 00:25:45.663 02:00:31 -- nvmf/common.sh@478 -- # killprocess 2244020 00:25:45.663 02:00:31 -- common/autotest_common.sh@926 -- # '[' -z 2244020 ']' 00:25:45.663 02:00:31 -- common/autotest_common.sh@930 -- # kill -0 2244020 00:25:45.663 02:00:31 -- common/autotest_common.sh@931 -- # uname 00:25:45.663 02:00:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:45.663 02:00:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2244020 00:25:45.922 02:00:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:45.922 02:00:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:45.922 02:00:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2244020' 00:25:45.922 killing process with pid 2244020 00:25:45.922 02:00:31 -- common/autotest_common.sh@945 
-- # kill 2244020 00:25:45.922 [2024-04-15 02:00:31.316521] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:45.922 02:00:31 -- common/autotest_common.sh@950 -- # wait 2244020 00:25:46.180 02:00:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:46.180 02:00:31 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:46.180 02:00:31 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:46.180 02:00:31 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:46.180 02:00:31 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:46.180 02:00:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:46.180 02:00:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:46.180 02:00:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.081 02:00:33 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:25:48.081 00:25:48.081 real 0m6.108s 00:25:48.081 user 0m7.416s 00:25:48.081 sys 0m1.939s 00:25:48.081 02:00:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:48.081 02:00:33 -- common/autotest_common.sh@10 -- # set +x 00:25:48.081 ************************************ 00:25:48.081 END TEST nvmf_identify 00:25:48.081 ************************************ 00:25:48.081 02:00:33 -- nvmf/nvmf.sh@97 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:48.081 02:00:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:48.081 02:00:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:48.081 02:00:33 -- common/autotest_common.sh@10 -- # set +x 00:25:48.081 ************************************ 00:25:48.081 START TEST nvmf_perf 00:25:48.081 ************************************ 00:25:48.081 02:00:33 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:48.081 * Looking for test storage... 
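The nvmf_perf suite starting here drives SPDK's spdk_nvme_perf first against the local PCIe drive and then against the NVMe/TCP target, sweeping queue depths and I/O sizes. The general shape of the fabrics invocations that appear further below, using the address this run configures:

  spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'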
00:25:48.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:48.081 02:00:33 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:48.081 02:00:33 -- nvmf/common.sh@7 -- # uname -s 00:25:48.081 02:00:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:48.081 02:00:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:48.081 02:00:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:48.081 02:00:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:48.081 02:00:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:48.081 02:00:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:48.081 02:00:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:48.081 02:00:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:48.081 02:00:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:48.081 02:00:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:48.081 02:00:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:48.081 02:00:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:48.081 02:00:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:48.081 02:00:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:48.081 02:00:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:48.081 02:00:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:48.081 02:00:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:48.081 02:00:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:48.081 02:00:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:48.081 02:00:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.081 02:00:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.081 02:00:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.081 02:00:33 -- paths/export.sh@5 -- # export PATH 00:25:48.081 02:00:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:48.081 02:00:33 -- nvmf/common.sh@46 -- # : 0 00:25:48.081 02:00:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:48.081 02:00:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:48.081 02:00:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:48.081 02:00:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:48.081 02:00:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:48.081 02:00:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:48.081 02:00:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:48.081 02:00:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:48.081 02:00:33 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:48.081 02:00:33 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:48.081 02:00:33 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:48.081 02:00:33 -- host/perf.sh@17 -- # nvmftestinit 00:25:48.081 02:00:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:48.082 02:00:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:48.082 02:00:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:48.082 02:00:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:48.082 02:00:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:48.082 02:00:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:48.082 02:00:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:48.082 02:00:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:48.082 02:00:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:48.082 02:00:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:48.082 02:00:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:48.082 02:00:33 -- common/autotest_common.sh@10 -- # set +x 00:25:50.610 02:00:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:50.610 02:00:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:50.610 02:00:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:50.610 02:00:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:50.610 02:00:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:50.610 02:00:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:50.610 02:00:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:50.610 02:00:35 -- nvmf/common.sh@294 -- # net_devs=() 
00:25:50.610 02:00:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:50.610 02:00:35 -- nvmf/common.sh@295 -- # e810=() 00:25:50.610 02:00:35 -- nvmf/common.sh@295 -- # local -ga e810 00:25:50.610 02:00:35 -- nvmf/common.sh@296 -- # x722=() 00:25:50.610 02:00:35 -- nvmf/common.sh@296 -- # local -ga x722 00:25:50.611 02:00:35 -- nvmf/common.sh@297 -- # mlx=() 00:25:50.611 02:00:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:50.611 02:00:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:50.611 02:00:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:50.611 02:00:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:50.611 02:00:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:50.611 02:00:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:50.611 02:00:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:50.611 02:00:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:50.611 02:00:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:50.611 02:00:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:50.611 02:00:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:50.611 02:00:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:50.611 02:00:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:50.611 02:00:35 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:25:50.611 02:00:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:50.611 02:00:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:50.611 02:00:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:50.611 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:50.611 02:00:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:50.611 02:00:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:50.611 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:50.611 02:00:35 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:50.611 02:00:35 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:50.611 02:00:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.611 02:00:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:50.611 02:00:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:25:50.611 02:00:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:50.611 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:50.611 02:00:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.611 02:00:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:50.611 02:00:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:50.611 02:00:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:50.611 02:00:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:50.611 02:00:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:50.611 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:50.611 02:00:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:50.611 02:00:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:50.611 02:00:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:50.611 02:00:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:25:50.611 02:00:35 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:50.611 02:00:35 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:50.611 02:00:35 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:50.611 02:00:35 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:25:50.611 02:00:35 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:50.611 02:00:35 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:50.611 02:00:35 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:25:50.611 02:00:35 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:50.611 02:00:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:50.611 02:00:35 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:25:50.611 02:00:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:25:50.611 02:00:35 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:25:50.611 02:00:35 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:50.611 02:00:35 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:50.611 02:00:35 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:50.611 02:00:35 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:25:50.611 02:00:35 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:50.611 02:00:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:50.611 02:00:35 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:50.611 02:00:35 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:25:50.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:50.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:25:50.611 00:25:50.611 --- 10.0.0.2 ping statistics --- 00:25:50.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.611 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:25:50.611 02:00:35 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:50.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
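The interface wiring performed above by nvmf/common.sh reduces to moving one port of the NIC pair into a private network namespace for the target and leaving the other in the root namespace for the initiator; condensed, with the device names found on this machine:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

The ping exchanges around this point confirm both directions of the 10.0.0.0/24 link before the target starts.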
00:25:50.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:25:50.611 00:25:50.611 --- 10.0.0.1 ping statistics --- 00:25:50.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:50.611 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:25:50.611 02:00:35 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:50.611 02:00:35 -- nvmf/common.sh@410 -- # return 0 00:25:50.611 02:00:35 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:50.611 02:00:35 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:50.611 02:00:35 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:50.611 02:00:35 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:50.611 02:00:35 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:50.611 02:00:35 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:50.611 02:00:35 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:50.611 02:00:35 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:50.611 02:00:35 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:50.611 02:00:35 -- common/autotest_common.sh@10 -- # set +x 00:25:50.611 02:00:35 -- nvmf/common.sh@469 -- # nvmfpid=2246245 00:25:50.611 02:00:35 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:50.611 02:00:35 -- nvmf/common.sh@470 -- # waitforlisten 2246245 00:25:50.611 02:00:35 -- common/autotest_common.sh@819 -- # '[' -z 2246245 ']' 00:25:50.611 02:00:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.611 02:00:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:50.611 02:00:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:50.611 02:00:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:50.611 02:00:35 -- common/autotest_common.sh@10 -- # set +x 00:25:50.611 [2024-04-15 02:00:35.911055] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:25:50.611 [2024-04-15 02:00:35.911140] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.611 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.611 [2024-04-15 02:00:35.991379] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:50.611 [2024-04-15 02:00:36.080306] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:50.611 [2024-04-15 02:00:36.080467] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.611 [2024-04-15 02:00:36.080492] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.611 [2024-04-15 02:00:36.080507] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
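nvmfappstart above boils down to launching the target inside that namespace with a four-core mask and the full tracepoint group mask, then waiting for its RPC socket; the underlying command as traced in this log, with the workspace path shortened:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

The reactor-started messages that follow are the normal SPDK startup output for that mask, one reactor per core.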
00:25:50.611 [2024-04-15 02:00:36.080657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:50.611 [2024-04-15 02:00:36.080723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:50.611 [2024-04-15 02:00:36.080788] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:50.611 [2024-04-15 02:00:36.080795] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:51.575 02:00:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:51.575 02:00:36 -- common/autotest_common.sh@852 -- # return 0 00:25:51.575 02:00:36 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:51.575 02:00:36 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:51.575 02:00:36 -- common/autotest_common.sh@10 -- # set +x 00:25:51.575 02:00:36 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:51.575 02:00:36 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:51.575 02:00:36 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:54.856 02:00:39 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:54.856 02:00:39 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:54.856 02:00:40 -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:25:54.856 02:00:40 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:54.856 02:00:40 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:54.856 02:00:40 -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:25:54.856 02:00:40 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:54.856 02:00:40 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:54.856 02:00:40 -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:55.114 [2024-04-15 02:00:40.694802] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.114 02:00:40 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:55.371 02:00:40 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:55.371 02:00:40 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:55.629 02:00:41 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:55.629 02:00:41 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:55.887 02:00:41 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:56.145 [2024-04-15 02:00:41.682607] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.145 02:00:41 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:56.403 02:00:41 -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:25:56.403 02:00:41 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:56.403 02:00:41 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 
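Strung together, the RPCs traced above provision the target end to end. Issued by hand, the same sequence would look like this (bdev names, NQN, and address taken from this run; rpc.py as in the workspace scripts directory):

  rpc.py bdev_malloc_create 64 512
  rpc.py nvmf_create_transport -t tcp -o
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The -a flag lets any host NQN connect; Nvme0n1 is the local drive at 0000:88:00.0 attached earlier via gen_nvme.sh, so the subsystem exports one RAM-backed and one disk-backed namespace.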
00:25:56.403 02:00:41 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:25:57.775 Initializing NVMe Controllers 00:25:57.775 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:25:57.775 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:25:57.775 Initialization complete. Launching workers. 00:25:57.775 ======================================================== 00:25:57.776 Latency(us) 00:25:57.776 Device Information : IOPS MiB/s Average min max 00:25:57.776 PCIE (0000:88:00.0) NSID 1 from core 0: 86027.29 336.04 371.43 37.41 4316.27 00:25:57.776 ======================================================== 00:25:57.776 Total : 86027.29 336.04 371.43 37.41 4316.27 00:25:57.776 00:25:57.776 02:00:43 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:57.776 EAL: No free 2048 kB hugepages reported on node 1 00:25:59.149 Initializing NVMe Controllers 00:25:59.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:59.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:59.149 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:59.149 Initialization complete. Launching workers. 00:25:59.149 ======================================================== 00:25:59.149 Latency(us) 00:25:59.149 Device Information : IOPS MiB/s Average min max 00:25:59.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 100.00 0.39 10395.61 393.76 46103.62 00:25:59.149 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 48.00 0.19 21259.40 7909.83 47899.11 00:25:59.149 ======================================================== 00:25:59.149 Total : 148.00 0.58 13919.00 393.76 47899.11 00:25:59.149 00:25:59.149 02:00:44 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:59.149 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.083 Initializing NVMe Controllers 00:26:00.083 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:00.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:00.083 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:00.083 Initialization complete. Launching workers. 
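This third run repeats the 4 KiB random mix at queue depth 32 with -HI added; to my reading these are spdk_nvme_perf's TCP header-digest and data-digest switches, so the numbers that follow show the same workload with per-PDU checksumming enabled (treat that flag reading as an assumption, the trace does not spell it out).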
00:26:00.083 ======================================================== 00:26:00.083 Latency(us) 00:26:00.083 Device Information : IOPS MiB/s Average min max 00:26:00.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7445.52 29.08 4344.32 788.71 47266.27 00:26:00.083 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3870.63 15.12 8287.05 5650.75 19147.09 00:26:00.083 ======================================================== 00:26:00.083 Total : 11316.15 44.20 5692.91 788.71 47266.27 00:26:00.083 00:26:00.083 02:00:45 -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:26:00.083 02:00:45 -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:26:00.083 02:00:45 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:00.083 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.364 Initializing NVMe Controllers 00:26:03.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:03.364 Controller IO queue size 128, less than required. 00:26:03.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:03.364 Controller IO queue size 128, less than required. 00:26:03.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:03.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:03.364 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:03.364 Initialization complete. Launching workers. 00:26:03.364 ======================================================== 00:26:03.364 Latency(us) 00:26:03.364 Device Information : IOPS MiB/s Average min max 00:26:03.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 695.49 173.87 190405.06 97366.38 255848.41 00:26:03.364 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 580.00 145.00 232121.03 86708.17 382037.70 00:26:03.364 ======================================================== 00:26:03.364 Total : 1275.49 318.87 209374.30 86708.17 382037.70 00:26:03.364 00:26:03.364 02:00:48 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:03.364 EAL: No free 2048 kB hugepages reported on node 1 00:26:03.364 No valid NVMe controllers or AIO or URING devices found 00:26:03.364 Initializing NVMe Controllers 00:26:03.364 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:03.364 Controller IO queue size 128, less than required. 00:26:03.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:03.364 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:03.364 Controller IO queue size 128, less than required. 00:26:03.364 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:03.364 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:26:03.364 WARNING: Some requested NVMe devices were skipped 00:26:03.364 02:00:48 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:03.364 EAL: No free 2048 kB hugepages reported on node 1 00:26:05.894 Initializing NVMe Controllers 00:26:05.894 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:05.894 Controller IO queue size 128, less than required. 00:26:05.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:05.894 Controller IO queue size 128, less than required. 00:26:05.894 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:05.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:05.894 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:05.894 Initialization complete. Launching workers. 00:26:05.894 00:26:05.894 ==================== 00:26:05.894 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:05.894 TCP transport: 00:26:05.894 polls: 28652 00:26:05.894 idle_polls: 10773 00:26:05.894 sock_completions: 17879 00:26:05.894 nvme_completions: 2437 00:26:05.894 submitted_requests: 3789 00:26:05.894 queued_requests: 1 00:26:05.894 00:26:05.894 ==================== 00:26:05.894 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:05.894 TCP transport: 00:26:05.894 polls: 26344 00:26:05.894 idle_polls: 11236 00:26:05.894 sock_completions: 15108 00:26:05.894 nvme_completions: 2703 00:26:05.894 submitted_requests: 4215 00:26:05.894 queued_requests: 1 00:26:05.894 ======================================================== 00:26:05.894 Latency(us) 00:26:05.895 Device Information : IOPS MiB/s Average min max 00:26:05.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 671.94 167.98 200720.42 96052.42 287906.54 00:26:05.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 737.83 184.46 176570.31 57629.17 284149.65 00:26:05.895 ======================================================== 00:26:05.895 Total : 1409.77 352.44 188080.95 57629.17 287906.54 00:26:05.895 00:26:05.895 02:00:51 -- host/perf.sh@66 -- # sync 00:26:05.895 02:00:51 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:05.895 02:00:51 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:26:05.895 02:00:51 -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:26:05.895 02:00:51 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:26:09.167 02:00:54 -- host/perf.sh@72 -- # ls_guid=d339c3da-cc12-4559-b5fb-2828aa5a3c1f 00:26:09.167 02:00:54 -- host/perf.sh@73 -- # get_lvs_free_mb d339c3da-cc12-4559-b5fb-2828aa5a3c1f 00:26:09.167 02:00:54 -- common/autotest_common.sh@1343 -- # local lvs_uuid=d339c3da-cc12-4559-b5fb-2828aa5a3c1f 00:26:09.167 02:00:54 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:09.167 02:00:54 -- common/autotest_common.sh@1345 -- # local fc 00:26:09.167 02:00:54 -- common/autotest_common.sh@1346 -- # local cs 00:26:09.167 02:00:54 -- common/autotest_common.sh@1347 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:09.167 02:00:54 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:09.167 { 00:26:09.167 "uuid": "d339c3da-cc12-4559-b5fb-2828aa5a3c1f", 00:26:09.167 "name": "lvs_0", 00:26:09.167 "base_bdev": "Nvme0n1", 00:26:09.167 "total_data_clusters": 238234, 00:26:09.167 "free_clusters": 238234, 00:26:09.167 "block_size": 512, 00:26:09.167 "cluster_size": 4194304 00:26:09.167 } 00:26:09.167 ]' 00:26:09.167 02:00:54 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="d339c3da-cc12-4559-b5fb-2828aa5a3c1f") .free_clusters' 00:26:09.425 02:00:54 -- common/autotest_common.sh@1348 -- # fc=238234 00:26:09.425 02:00:54 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="d339c3da-cc12-4559-b5fb-2828aa5a3c1f") .cluster_size' 00:26:09.425 02:00:54 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:09.425 02:00:54 -- common/autotest_common.sh@1352 -- # free_mb=952936 00:26:09.425 02:00:54 -- common/autotest_common.sh@1353 -- # echo 952936 00:26:09.425 952936 00:26:09.425 02:00:54 -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:26:09.425 02:00:54 -- host/perf.sh@78 -- # free_mb=20480 00:26:09.425 02:00:54 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u d339c3da-cc12-4559-b5fb-2828aa5a3c1f lbd_0 20480 00:26:09.989 02:00:55 -- host/perf.sh@80 -- # lb_guid=51632116-2d16-41c4-a61c-9d118010b626 00:26:09.989 02:00:55 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 51632116-2d16-41c4-a61c-9d118010b626 lvs_n_0 00:26:10.921 02:00:56 -- host/perf.sh@83 -- # ls_nested_guid=4fb93997-982c-4d5a-a188-79cec3f0db59 00:26:10.921 02:00:56 -- host/perf.sh@84 -- # get_lvs_free_mb 4fb93997-982c-4d5a-a188-79cec3f0db59 00:26:10.921 02:00:56 -- common/autotest_common.sh@1343 -- # local lvs_uuid=4fb93997-982c-4d5a-a188-79cec3f0db59 00:26:10.921 02:00:56 -- common/autotest_common.sh@1344 -- # local lvs_info 00:26:10.921 02:00:56 -- common/autotest_common.sh@1345 -- # local fc 00:26:10.921 02:00:56 -- common/autotest_common.sh@1346 -- # local cs 00:26:10.921 02:00:56 -- common/autotest_common.sh@1347 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:26:10.921 02:00:56 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:26:10.921 { 00:26:10.921 "uuid": "d339c3da-cc12-4559-b5fb-2828aa5a3c1f", 00:26:10.921 "name": "lvs_0", 00:26:10.921 "base_bdev": "Nvme0n1", 00:26:10.921 "total_data_clusters": 238234, 00:26:10.921 "free_clusters": 233114, 00:26:10.921 "block_size": 512, 00:26:10.921 "cluster_size": 4194304 00:26:10.921 }, 00:26:10.921 { 00:26:10.921 "uuid": "4fb93997-982c-4d5a-a188-79cec3f0db59", 00:26:10.921 "name": "lvs_n_0", 00:26:10.921 "base_bdev": "51632116-2d16-41c4-a61c-9d118010b626", 00:26:10.921 "total_data_clusters": 5114, 00:26:10.921 "free_clusters": 5114, 00:26:10.921 "block_size": 512, 00:26:10.921 "cluster_size": 4194304 00:26:10.921 } 00:26:10.921 ]' 00:26:10.921 02:00:56 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="4fb93997-982c-4d5a-a188-79cec3f0db59") .free_clusters' 00:26:10.921 02:00:56 -- common/autotest_common.sh@1348 -- # fc=5114 00:26:10.921 02:00:56 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="4fb93997-982c-4d5a-a188-79cec3f0db59") .cluster_size' 00:26:11.178 02:00:56 -- common/autotest_common.sh@1349 -- # cs=4194304 00:26:11.178 02:00:56 -- common/autotest_common.sh@1352 -- # 
free_mb=20456 00:26:11.178 02:00:56 -- common/autotest_common.sh@1353 -- # echo 20456 00:26:11.178 20456 00:26:11.178 02:00:56 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:26:11.178 02:00:56 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4fb93997-982c-4d5a-a188-79cec3f0db59 lbd_nest_0 20456 00:26:11.436 02:00:56 -- host/perf.sh@88 -- # lb_nested_guid=3634883b-7c30-4c82-9f54-552f7b0140d4 00:26:11.436 02:00:56 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:11.436 02:00:57 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:26:11.436 02:00:57 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 3634883b-7c30-4c82-9f54-552f7b0140d4 00:26:11.693 02:00:57 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:11.977 02:00:57 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:26:11.977 02:00:57 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:26:11.977 02:00:57 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:11.977 02:00:57 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:11.977 02:00:57 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:11.977 EAL: No free 2048 kB hugepages reported on node 1 00:26:24.169 Initializing NVMe Controllers 00:26:24.169 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:24.169 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:24.169 Initialization complete. Launching workers. 00:26:24.169 ======================================================== 00:26:24.169 Latency(us) 00:26:24.169 Device Information : IOPS MiB/s Average min max 00:26:24.169 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 44.39 0.02 22527.67 283.94 46092.85 00:26:24.169 ======================================================== 00:26:24.169 Total : 44.39 0.02 22527.67 283.94 46092.85 00:26:24.169 00:26:24.169 02:01:07 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:24.169 02:01:07 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:24.169 EAL: No free 2048 kB hugepages reported on node 1 00:26:34.131 Initializing NVMe Controllers 00:26:34.131 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:34.131 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:34.131 Initialization complete. Launching workers. 
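The q=1, 512-byte run above is the first leg of a 3x2 sweep: host/perf.sh loops queue depths 1, 32, and 128 against I/O sizes 512 and 131072 bytes, ten seconds each, all against the nested lvol namespace just exported. (Its 20456 MB size follows from the lvstore report: 5114 free clusters x 4 MiB per cluster; the earlier 952936 MB for lvs_0 is likewise 238234 x 4 MiB.) The loop being executed, condensed:

  for qd in 1 32 128; do
      for o in 512 131072; do
          spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
              -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
      done
  done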
00:26:34.131 ======================================================== 00:26:34.131 Latency(us) 00:26:34.131 Device Information : IOPS MiB/s Average min max 00:26:34.131 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 84.27 10.53 11875.53 5038.62 47886.66 00:26:34.131 ======================================================== 00:26:34.131 Total : 84.27 10.53 11875.53 5038.62 47886.66 00:26:34.131 00:26:34.131 02:01:18 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:34.131 02:01:18 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:34.131 02:01:18 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:34.131 EAL: No free 2048 kB hugepages reported on node 1 00:26:44.097 Initializing NVMe Controllers 00:26:44.097 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:44.097 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:44.097 Initialization complete. Launching workers. 00:26:44.097 ======================================================== 00:26:44.097 Latency(us) 00:26:44.097 Device Information : IOPS MiB/s Average min max 00:26:44.097 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6785.10 3.31 4728.03 353.77 47896.64 00:26:44.097 ======================================================== 00:26:44.097 Total : 6785.10 3.31 4728.03 353.77 47896.64 00:26:44.097 00:26:44.097 02:01:28 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:44.097 02:01:28 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:44.097 EAL: No free 2048 kB hugepages reported on node 1 00:26:54.067 Initializing NVMe Controllers 00:26:54.067 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:54.067 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:54.067 Initialization complete. Launching workers. 00:26:54.067 ======================================================== 00:26:54.067 Latency(us) 00:26:54.067 Device Information : IOPS MiB/s Average min max 00:26:54.067 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1361.00 170.12 23546.81 3465.09 49088.85 00:26:54.067 ======================================================== 00:26:54.067 Total : 1361.00 170.12 23546.81 3465.09 49088.85 00:26:54.067 00:26:54.067 02:01:38 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:54.067 02:01:38 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:54.067 02:01:38 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:54.067 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.063 Initializing NVMe Controllers 00:27:04.063 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:04.063 Controller IO queue size 128, less than required. 00:27:04.063 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:04.063 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:04.063 Initialization complete. Launching workers. 
00:27:04.063 ======================================================== 00:27:04.063 Latency(us) 00:27:04.063 Device Information : IOPS MiB/s Average min max 00:27:04.063 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12027.83 5.87 10641.72 1720.21 23178.88 00:27:04.063 ======================================================== 00:27:04.063 Total : 12027.83 5.87 10641.72 1720.21 23178.88 00:27:04.063 00:27:04.063 02:01:49 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:27:04.063 02:01:49 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:04.063 EAL: No free 2048 kB hugepages reported on node 1 00:27:14.029 Initializing NVMe Controllers 00:27:14.030 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:14.030 Controller IO queue size 128, less than required. 00:27:14.030 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:27:14.030 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:14.030 Initialization complete. Launching workers. 00:27:14.030 ======================================================== 00:27:14.030 Latency(us) 00:27:14.030 Device Information : IOPS MiB/s Average min max 00:27:14.030 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1186.51 148.31 108275.35 23085.57 215478.60 00:27:14.030 ======================================================== 00:27:14.030 Total : 1186.51 148.31 108275.35 23085.57 215478.60 00:27:14.030 00:27:14.030 02:01:59 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:14.287 02:01:59 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3634883b-7c30-4c82-9f54-552f7b0140d4 00:27:15.218 02:02:00 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:15.218 02:02:00 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 51632116-2d16-41c4-a61c-9d118010b626 00:27:15.782 02:02:01 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:15.782 02:02:01 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:15.783 02:02:01 -- host/perf.sh@114 -- # nvmftestfini 00:27:15.783 02:02:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:15.783 02:02:01 -- nvmf/common.sh@116 -- # sync 00:27:15.783 02:02:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:15.783 02:02:01 -- nvmf/common.sh@119 -- # set +e 00:27:15.783 02:02:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:15.783 02:02:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:15.783 rmmod nvme_tcp 00:27:15.783 rmmod nvme_fabrics 00:27:16.041 rmmod nvme_keyring 00:27:16.041 02:02:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:16.041 02:02:01 -- nvmf/common.sh@123 -- # set -e 00:27:16.041 02:02:01 -- nvmf/common.sh@124 -- # return 0 00:27:16.041 02:02:01 -- nvmf/common.sh@477 -- # '[' -n 2246245 ']' 00:27:16.041 02:02:01 -- nvmf/common.sh@478 -- # killprocess 2246245 00:27:16.041 02:02:01 -- common/autotest_common.sh@926 -- # '[' -z 2246245 ']' 00:27:16.041 02:02:01 -- common/autotest_common.sh@930 -- # 
kill -0 2246245 00:27:16.041 02:02:01 -- common/autotest_common.sh@931 -- # uname 00:27:16.041 02:02:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:16.041 02:02:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2246245 00:27:16.041 02:02:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:16.041 02:02:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:16.041 02:02:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2246245' 00:27:16.041 killing process with pid 2246245 00:27:16.041 02:02:01 -- common/autotest_common.sh@945 -- # kill 2246245 00:27:16.041 02:02:01 -- common/autotest_common.sh@950 -- # wait 2246245 00:27:17.937 02:02:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:17.937 02:02:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:17.937 02:02:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:17.937 02:02:03 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:17.937 02:02:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:17.937 02:02:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.937 02:02:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.937 02:02:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.844 02:02:05 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:19.844 00:27:19.844 real 1m31.457s 00:27:19.844 user 5m32.174s 00:27:19.844 sys 0m14.715s 00:27:19.844 02:02:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:19.844 02:02:05 -- common/autotest_common.sh@10 -- # set +x 00:27:19.844 ************************************ 00:27:19.844 END TEST nvmf_perf 00:27:19.844 ************************************ 00:27:19.844 02:02:05 -- nvmf/nvmf.sh@98 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:19.844 02:02:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:19.844 02:02:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:19.844 02:02:05 -- common/autotest_common.sh@10 -- # set +x 00:27:19.844 ************************************ 00:27:19.844 START TEST nvmf_fio_host 00:27:19.844 ************************************ 00:27:19.844 02:02:05 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:27:19.844 * Looking for test storage... 
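For reference, the entire latency matrix in the nvmf_perf run above comes from one small nested loop in host/perf.sh, visible in the qd_depth/io_size xtrace lines. A minimal sketch of that sweep, with the long Jenkins workspace path shortened to an assumed SPDK_ROOT and the flags copied verbatim from the trace:

    #!/usr/bin/env bash
    # Sweep queue depth x IO size against the TCP listener created earlier
    # (nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420). SPDK_ROOT is an assumed
    # shorthand for the full workspace path shown in the trace.
    qd_depth=("1" "32" "128")
    io_size=("512" "131072")
    for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
        "$SPDK_ROOT/build/bin/spdk_nvme_perf" -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
          -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
      done
    done

Note that only the q=128 runs emit the "Controller IO queue size 128, less than required" warning above: at that depth the submission rate exceeds what the controller advertises, so requests queue at the NVMe driver instead of failing.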
00:27:19.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:19.844 02:02:05 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.844 02:02:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.844 02:02:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.844 02:02:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.844 02:02:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.844 02:02:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.844 02:02:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.845 02:02:05 -- paths/export.sh@5 -- # export PATH 00:27:19.845 02:02:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.845 02:02:05 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.845 02:02:05 -- nvmf/common.sh@7 -- # uname -s 00:27:19.845 02:02:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.845 02:02:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.845 02:02:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.845 02:02:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.845 02:02:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.845 02:02:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.845 02:02:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.845 02:02:05 -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.845 02:02:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.845 02:02:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.845 02:02:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:19.845 02:02:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:19.845 02:02:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.845 02:02:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.845 02:02:05 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.845 02:02:05 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.845 02:02:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.845 02:02:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.845 02:02:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.845 02:02:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.845 02:02:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.845 02:02:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.845 02:02:05 -- paths/export.sh@5 -- # export PATH 00:27:19.845 02:02:05 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.845 02:02:05 -- nvmf/common.sh@46 -- # : 0 00:27:19.845 02:02:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:19.845 02:02:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:19.845 02:02:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:19.845 02:02:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.845 02:02:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.845 02:02:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:19.845 02:02:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:19.845 02:02:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:19.845 02:02:05 -- host/fio.sh@12 -- # nvmftestinit 00:27:19.845 02:02:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:19.845 02:02:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.845 02:02:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:19.845 02:02:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:19.845 02:02:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:19.845 02:02:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.845 02:02:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.845 02:02:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.845 02:02:05 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:19.845 02:02:05 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:19.845 02:02:05 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:19.845 02:02:05 -- common/autotest_common.sh@10 -- # set +x 00:27:21.748 02:02:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:21.748 02:02:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:21.748 02:02:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:21.748 02:02:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:21.748 02:02:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:21.748 02:02:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:21.748 02:02:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:21.748 02:02:07 -- nvmf/common.sh@294 -- # net_devs=() 00:27:21.748 02:02:07 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:21.748 02:02:07 -- nvmf/common.sh@295 -- # e810=() 00:27:21.748 02:02:07 -- nvmf/common.sh@295 -- # local -ga e810 00:27:21.748 02:02:07 -- nvmf/common.sh@296 -- # x722=() 00:27:21.748 02:02:07 -- nvmf/common.sh@296 -- # local -ga x722 00:27:21.748 02:02:07 -- nvmf/common.sh@297 -- # mlx=() 00:27:21.748 02:02:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:21.748 02:02:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:21.748 02:02:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:21.748 02:02:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:21.748 02:02:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:21.748 02:02:07 -- nvmf/common.sh@307 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:21.748 02:02:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:21.748 02:02:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:21.748 02:02:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:21.748 02:02:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:21.748 02:02:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:21.748 02:02:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:21.748 02:02:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:21.748 02:02:07 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:21.748 02:02:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:21.748 02:02:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:21.748 02:02:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:21.748 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:21.748 02:02:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:21.748 02:02:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:21.748 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:21.748 02:02:07 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:21.748 02:02:07 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:21.748 02:02:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.748 02:02:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:21.748 02:02:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.748 02:02:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:21.748 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:21.748 02:02:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.748 02:02:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:21.748 02:02:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:21.748 02:02:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:21.748 02:02:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:21.748 02:02:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:21.748 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:21.748 02:02:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:21.748 02:02:07 -- 
nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:21.748 02:02:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:21.748 02:02:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:21.748 02:02:07 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:21.748 02:02:07 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:21.748 02:02:07 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:21.748 02:02:07 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:21.748 02:02:07 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:21.748 02:02:07 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:21.748 02:02:07 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:21.748 02:02:07 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:21.748 02:02:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:21.748 02:02:07 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:21.748 02:02:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:21.748 02:02:07 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:21.748 02:02:07 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:21.748 02:02:07 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:21.748 02:02:07 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:21.748 02:02:07 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:21.748 02:02:07 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:21.748 02:02:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:21.748 02:02:07 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:21.748 02:02:07 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:21.748 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:21.748 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:27:21.748 00:27:21.748 --- 10.0.0.2 ping statistics --- 00:27:21.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.748 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:27:21.748 02:02:07 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:21.748 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:21.748 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:27:21.748 00:27:21.748 --- 10.0.0.1 ping statistics --- 00:27:21.748 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:21.748 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:27:21.748 02:02:07 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:21.748 02:02:07 -- nvmf/common.sh@410 -- # return 0 00:27:21.748 02:02:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:21.748 02:02:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:21.748 02:02:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:21.748 02:02:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:21.748 02:02:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:21.748 02:02:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:21.748 02:02:07 -- host/fio.sh@14 -- # [[ y != y ]] 00:27:21.748 02:02:07 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:27:21.748 02:02:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:21.748 02:02:07 -- common/autotest_common.sh@10 -- # set +x 00:27:21.748 02:02:07 -- host/fio.sh@22 -- # nvmfpid=2258561 00:27:21.748 02:02:07 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:21.748 02:02:07 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:21.748 02:02:07 -- host/fio.sh@26 -- # waitforlisten 2258561 00:27:21.748 02:02:07 -- common/autotest_common.sh@819 -- # '[' -z 2258561 ']' 00:27:21.748 02:02:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:21.748 02:02:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:21.748 02:02:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:21.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:21.748 02:02:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:21.748 02:02:07 -- common/autotest_common.sh@10 -- # set +x 00:27:21.748 [2024-04-15 02:02:07.339911] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:27:21.748 [2024-04-15 02:02:07.339996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:21.748 EAL: No free 2048 kB hugepages reported on node 1 00:27:22.006 [2024-04-15 02:02:07.413566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:22.006 [2024-04-15 02:02:07.505247] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:22.006 [2024-04-15 02:02:07.505424] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.006 [2024-04-15 02:02:07.505448] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:22.006 [2024-04-15 02:02:07.505465] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
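The topology that nvmftestinit builds here is worth spelling out: nvmf_tcp_init moves one port of the E810 pair (cvl_0_0) into a private network namespace for the target and keeps its sibling (cvl_0_1) in the root namespace as the initiator side. A condensed sketch of the plumbing, reconstructed from the ip/iptables commands traced above:

    # Target side lives in its own netns; initiator stays in the root ns.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP
    # Sanity checks, as in the trace (~0.2 ms RTT in both directions):
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target-side command is then wrapped in "ip netns exec cvl_0_0_ns_spdk", which is why nvmf_tgt above was launched through exactly that prefix.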
00:27:22.006 [2024-04-15 02:02:07.505547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:22.006 [2024-04-15 02:02:07.505608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:22.006 [2024-04-15 02:02:07.505673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:22.006 [2024-04-15 02:02:07.505679] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:22.940 02:02:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:22.940 02:02:08 -- common/autotest_common.sh@852 -- # return 0 00:27:22.940 02:02:08 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:22.940 02:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:22.940 02:02:08 -- common/autotest_common.sh@10 -- # set +x 00:27:22.940 [2024-04-15 02:02:08.315572] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:22.940 02:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:22.940 02:02:08 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:27:22.940 02:02:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:22.940 02:02:08 -- common/autotest_common.sh@10 -- # set +x 00:27:22.940 02:02:08 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:22.940 02:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:22.940 02:02:08 -- common/autotest_common.sh@10 -- # set +x 00:27:22.940 Malloc1 00:27:22.940 02:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:22.940 02:02:08 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:22.940 02:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:22.940 02:02:08 -- common/autotest_common.sh@10 -- # set +x 00:27:22.940 02:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:22.940 02:02:08 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:22.940 02:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:22.940 02:02:08 -- common/autotest_common.sh@10 -- # set +x 00:27:22.940 02:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:22.940 02:02:08 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:22.940 02:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:22.940 02:02:08 -- common/autotest_common.sh@10 -- # set +x 00:27:22.940 [2024-04-15 02:02:08.392722] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:22.940 02:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:22.940 02:02:08 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:22.940 02:02:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:22.940 02:02:08 -- common/autotest_common.sh@10 -- # set +x 00:27:22.940 02:02:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:22.940 02:02:08 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:22.940 02:02:08 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:22.940 02:02:08 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:22.940 02:02:08 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:22.940 02:02:08 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:22.940 02:02:08 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:22.940 02:02:08 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:22.940 02:02:08 -- common/autotest_common.sh@1320 -- # shift 00:27:22.940 02:02:08 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:22.940 02:02:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:22.940 02:02:08 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:22.940 02:02:08 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:22.940 02:02:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:22.940 02:02:08 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:22.940 02:02:08 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:22.940 02:02:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:22.940 02:02:08 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:22.940 02:02:08 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:22.940 02:02:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:22.940 02:02:08 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:22.940 02:02:08 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:22.940 02:02:08 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:22.940 02:02:08 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:23.198 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:23.198 fio-3.35 00:27:23.198 Starting 1 thread 00:27:23.198 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.764 00:27:25.764 test: (groupid=0, jobs=1): err= 0: pid=2258911: Mon Apr 15 02:02:10 2024 00:27:25.764 read: IOPS=9390, BW=36.7MiB/s (38.5MB/s)(73.5MiB/2005msec) 00:27:25.764 slat (nsec): min=1996, max=112587, avg=2518.29, stdev=1557.64 00:27:25.764 clat (usec): min=4506, max=12228, avg=7734.12, stdev=945.19 00:27:25.764 lat (usec): min=4509, max=12231, avg=7736.64, stdev=945.21 00:27:25.764 clat percentiles (usec): 00:27:25.764 | 1.00th=[ 5735], 5.00th=[ 6325], 10.00th=[ 6652], 20.00th=[ 6980], 00:27:25.764 | 30.00th=[ 7242], 40.00th=[ 7439], 50.00th=[ 7701], 60.00th=[ 7898], 00:27:25.764 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 8848], 95.00th=[ 9503], 00:27:25.764 | 99.00th=[10421], 99.50th=[11076], 99.90th=[11469], 99.95th=[11600], 00:27:25.764 | 99.99th=[12256] 00:27:25.764 bw ( KiB/s): min=36664, max=38040, per=99.81%, avg=37488.00, stdev=641.83, samples=4 00:27:25.764 iops : min= 9166, max= 9510, avg=9372.00, stdev=160.46, samples=4 00:27:25.764 write: IOPS=9392, BW=36.7MiB/s (38.5MB/s)(73.6MiB/2005msec); 0 zone resets 00:27:25.764 slat (nsec): min=2074, max=88233, avg=2633.76, stdev=1389.22 00:27:25.764 clat 
(usec): min=2355, max=10214, avg=5858.82, stdev=769.96 00:27:25.764 lat (usec): min=2362, max=10217, avg=5861.46, stdev=769.94 00:27:25.764 clat percentiles (usec): 00:27:25.764 | 1.00th=[ 3884], 5.00th=[ 4490], 10.00th=[ 4883], 20.00th=[ 5276], 00:27:25.764 | 30.00th=[ 5538], 40.00th=[ 5735], 50.00th=[ 5932], 60.00th=[ 6128], 00:27:25.764 | 70.00th=[ 6259], 80.00th=[ 6456], 90.00th=[ 6783], 95.00th=[ 6980], 00:27:25.764 | 99.00th=[ 7570], 99.50th=[ 7898], 99.90th=[ 8848], 99.95th=[ 9372], 00:27:25.764 | 99.99th=[ 9896] 00:27:25.764 bw ( KiB/s): min=37240, max=38040, per=100.00%, avg=37574.00, stdev=348.49, samples=4 00:27:25.764 iops : min= 9310, max= 9510, avg=9393.50, stdev=87.12, samples=4 00:27:25.764 lat (msec) : 4=0.72%, 10=98.40%, 20=0.89% 00:27:25.764 cpu : usr=52.54%, sys=35.18%, ctx=59, majf=0, minf=6 00:27:25.764 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:25.764 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.764 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:25.764 issued rwts: total=18827,18832,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.764 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:25.764 00:27:25.764 Run status group 0 (all jobs): 00:27:25.764 READ: bw=36.7MiB/s (38.5MB/s), 36.7MiB/s-36.7MiB/s (38.5MB/s-38.5MB/s), io=73.5MiB (77.1MB), run=2005-2005msec 00:27:25.764 WRITE: bw=36.7MiB/s (38.5MB/s), 36.7MiB/s-36.7MiB/s (38.5MB/s-38.5MB/s), io=73.6MiB (77.1MB), run=2005-2005msec 00:27:25.764 02:02:10 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:25.764 02:02:10 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:25.764 02:02:10 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:25.764 02:02:10 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:25.764 02:02:10 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:25.764 02:02:10 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:25.764 02:02:10 -- common/autotest_common.sh@1320 -- # shift 00:27:25.764 02:02:10 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:25.764 02:02:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:25.764 02:02:10 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:25.764 02:02:10 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:25.764 02:02:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:25.764 02:02:10 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:25.764 02:02:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:25.764 02:02:10 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:25.764 02:02:10 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:25.764 02:02:10 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:25.764 02:02:10 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:25.764 02:02:10 -- 
common/autotest_common.sh@1324 -- # asan_lib= 00:27:25.764 02:02:10 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:25.764 02:02:10 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:25.764 02:02:10 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:27:25.764 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:25.764 fio-3.35 00:27:25.764 Starting 1 thread 00:27:25.764 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.315 00:27:28.315 test: (groupid=0, jobs=1): err= 0: pid=2259255: Mon Apr 15 02:02:13 2024 00:27:28.315 read: IOPS=6959, BW=109MiB/s (114MB/s)(218MiB/2008msec) 00:27:28.315 slat (nsec): min=2794, max=90444, avg=3611.76, stdev=1712.74 00:27:28.315 clat (usec): min=3263, max=30824, avg=11461.92, stdev=2736.94 00:27:28.315 lat (usec): min=3266, max=30829, avg=11465.54, stdev=2737.15 00:27:28.315 clat percentiles (usec): 00:27:28.315 | 1.00th=[ 5735], 5.00th=[ 7308], 10.00th=[ 8094], 20.00th=[ 9110], 00:27:28.315 | 30.00th=[ 9896], 40.00th=[10814], 50.00th=[11469], 60.00th=[12125], 00:27:28.315 | 70.00th=[12649], 80.00th=[13435], 90.00th=[14746], 95.00th=[16188], 00:27:28.315 | 99.00th=[19006], 99.50th=[19792], 99.90th=[20841], 99.95th=[20841], 00:27:28.315 | 99.99th=[23462] 00:27:28.315 bw ( KiB/s): min=49632, max=62400, per=49.73%, avg=55376.00, stdev=6315.31, samples=4 00:27:28.315 iops : min= 3102, max= 3900, avg=3461.00, stdev=394.71, samples=4 00:27:28.315 write: IOPS=4031, BW=63.0MiB/s (66.0MB/s)(114MiB/1804msec); 0 zone resets 00:27:28.315 slat (usec): min=30, max=190, avg=33.54, stdev= 5.49 00:27:28.315 clat (usec): min=5503, max=26182, avg=12431.78, stdev=2576.89 00:27:28.315 lat (usec): min=5535, max=26214, avg=12465.33, stdev=2578.18 00:27:28.315 clat percentiles (usec): 00:27:28.315 | 1.00th=[ 8094], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10290], 00:27:28.315 | 30.00th=[10945], 40.00th=[11469], 50.00th=[12125], 60.00th=[12780], 00:27:28.315 | 70.00th=[13435], 80.00th=[14222], 90.00th=[15664], 95.00th=[17695], 00:27:28.315 | 99.00th=[20317], 99.50th=[20579], 99.90th=[21890], 99.95th=[23200], 00:27:28.315 | 99.99th=[26084] 00:27:28.315 bw ( KiB/s): min=51776, max=65472, per=89.64%, avg=57816.00, stdev=6535.70, samples=4 00:27:28.315 iops : min= 3236, max= 4092, avg=3613.50, stdev=408.48, samples=4 00:27:28.315 lat (msec) : 4=0.05%, 10=25.32%, 20=73.84%, 50=0.79% 00:27:28.315 cpu : usr=78.43%, sys=18.63%, ctx=16, majf=0, minf=2 00:27:28.315 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:28.315 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:28.315 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:28.315 issued rwts: total=13975,7272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:28.315 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:28.315 00:27:28.315 Run status group 0 (all jobs): 00:27:28.315 READ: bw=109MiB/s (114MB/s), 109MiB/s-109MiB/s (114MB/s-114MB/s), io=218MiB (229MB), run=2008-2008msec 00:27:28.315 WRITE: bw=63.0MiB/s (66.0MB/s), 63.0MiB/s-63.0MiB/s (66.0MB/s-66.0MB/s), io=114MiB (119MB), run=1804-1804msec 00:27:28.315 02:02:13 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:28.315 02:02:13 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.315 02:02:13 -- common/autotest_common.sh@10 -- # set +x 00:27:28.315 02:02:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:28.315 02:02:13 -- host/fio.sh@47 -- # '[' 1 -eq 1 ']' 00:27:28.315 02:02:13 -- host/fio.sh@49 -- # bdfs=($(get_nvme_bdfs)) 00:27:28.315 02:02:13 -- host/fio.sh@49 -- # get_nvme_bdfs 00:27:28.315 02:02:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:27:28.315 02:02:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:27:28.315 02:02:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:28.315 02:02:13 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:28.315 02:02:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:27:28.315 02:02:13 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:27:28.315 02:02:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:27:28.315 02:02:13 -- host/fio.sh@50 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:27:28.315 02:02:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:28.315 02:02:13 -- common/autotest_common.sh@10 -- # set +x 00:27:30.842 Nvme0n1 00:27:30.842 02:02:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:30.842 02:02:16 -- host/fio.sh@51 -- # rpc_cmd bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:30.842 02:02:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:30.842 02:02:16 -- common/autotest_common.sh@10 -- # set +x 00:27:33.369 02:02:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:33.369 02:02:18 -- host/fio.sh@51 -- # ls_guid=0e211036-c1bb-4340-9461-61d06d85ac39 00:27:33.369 02:02:18 -- host/fio.sh@52 -- # get_lvs_free_mb 0e211036-c1bb-4340-9461-61d06d85ac39 00:27:33.369 02:02:18 -- common/autotest_common.sh@1343 -- # local lvs_uuid=0e211036-c1bb-4340-9461-61d06d85ac39 00:27:33.369 02:02:18 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:33.369 02:02:18 -- common/autotest_common.sh@1345 -- # local fc 00:27:33.369 02:02:18 -- common/autotest_common.sh@1346 -- # local cs 00:27:33.369 02:02:18 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:27:33.369 02:02:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:33.369 02:02:18 -- common/autotest_common.sh@10 -- # set +x 00:27:33.369 02:02:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:33.369 02:02:18 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:33.369 { 00:27:33.369 "uuid": "0e211036-c1bb-4340-9461-61d06d85ac39", 00:27:33.369 "name": "lvs_0", 00:27:33.369 "base_bdev": "Nvme0n1", 00:27:33.369 "total_data_clusters": 930, 00:27:33.369 "free_clusters": 930, 00:27:33.369 "block_size": 512, 00:27:33.369 "cluster_size": 1073741824 00:27:33.369 } 00:27:33.369 ]' 00:27:33.369 02:02:18 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="0e211036-c1bb-4340-9461-61d06d85ac39") .free_clusters' 00:27:33.369 02:02:18 -- common/autotest_common.sh@1348 -- # fc=930 00:27:33.369 02:02:18 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="0e211036-c1bb-4340-9461-61d06d85ac39") .cluster_size' 00:27:33.369 02:02:19 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:27:33.369 02:02:19 -- common/autotest_common.sh@1352 -- # free_mb=952320 00:27:33.369 02:02:19 -- common/autotest_common.sh@1353 -- # echo 952320 00:27:33.369 952320 00:27:33.369 02:02:19 
-- host/fio.sh@53 -- # rpc_cmd bdev_lvol_create -l lvs_0 lbd_0 952320 00:27:33.369 02:02:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:33.369 02:02:19 -- common/autotest_common.sh@10 -- # set +x 00:27:33.626 c2042631-ecb0-4288-891f-4be2b480f935 00:27:33.626 02:02:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:33.626 02:02:19 -- host/fio.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:33.626 02:02:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:33.626 02:02:19 -- common/autotest_common.sh@10 -- # set +x 00:27:33.626 02:02:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:33.626 02:02:19 -- host/fio.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:33.626 02:02:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:33.626 02:02:19 -- common/autotest_common.sh@10 -- # set +x 00:27:33.626 02:02:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:33.626 02:02:19 -- host/fio.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:33.626 02:02:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:33.626 02:02:19 -- common/autotest_common.sh@10 -- # set +x 00:27:33.626 02:02:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:33.626 02:02:19 -- host/fio.sh@57 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:33.626 02:02:19 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:33.626 02:02:19 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:33.626 02:02:19 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:33.626 02:02:19 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:33.626 02:02:19 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:33.626 02:02:19 -- common/autotest_common.sh@1320 -- # shift 00:27:33.626 02:02:19 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:33.626 02:02:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:33.626 02:02:19 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:33.626 02:02:19 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:33.626 02:02:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:33.626 02:02:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:33.626 02:02:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:33.626 02:02:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:33.626 02:02:19 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:33.626 02:02:19 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:33.626 02:02:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:33.626 02:02:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:33.626 02:02:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:33.626 02:02:19 -- common/autotest_common.sh@1331 -- # 
LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:33.626 02:02:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:33.884 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:33.884 fio-3.35 00:27:33.884 Starting 1 thread 00:27:33.884 EAL: No free 2048 kB hugepages reported on node 1 00:27:36.410 00:27:36.410 test: (groupid=0, jobs=1): err= 0: pid=2260302: Mon Apr 15 02:02:21 2024 00:27:36.410 read: IOPS=6420, BW=25.1MiB/s (26.3MB/s)(50.3MiB/2007msec) 00:27:36.410 slat (nsec): min=1952, max=149806, avg=2459.73, stdev=1877.21 00:27:36.410 clat (usec): min=1388, max=171525, avg=11015.45, stdev=11334.37 00:27:36.410 lat (usec): min=1391, max=171565, avg=11017.91, stdev=11334.65 00:27:36.410 clat percentiles (msec): 00:27:36.410 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:27:36.410 | 30.00th=[ 10], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:27:36.410 | 70.00th=[ 11], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 12], 00:27:36.410 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:27:36.410 | 99.99th=[ 171] 00:27:36.410 bw ( KiB/s): min=17704, max=28408, per=99.84%, avg=25640.00, stdev=5291.58, samples=4 00:27:36.410 iops : min= 4426, max= 7102, avg=6410.50, stdev=1323.23, samples=4 00:27:36.410 write: IOPS=6425, BW=25.1MiB/s (26.3MB/s)(50.4MiB/2007msec); 0 zone resets 00:27:36.410 slat (usec): min=2, max=102, avg= 2.54, stdev= 1.32 00:27:36.410 clat (usec): min=701, max=169921, avg=8739.15, stdev=10626.16 00:27:36.410 lat (usec): min=704, max=169927, avg=8741.69, stdev=10626.40 00:27:36.410 clat percentiles (msec): 00:27:36.410 | 1.00th=[ 6], 5.00th=[ 7], 10.00th=[ 7], 20.00th=[ 8], 00:27:36.410 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:27:36.410 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 10], 95.00th=[ 10], 00:27:36.410 | 99.00th=[ 11], 99.50th=[ 15], 99.90th=[ 169], 99.95th=[ 169], 00:27:36.410 | 99.99th=[ 171] 00:27:36.410 bw ( KiB/s): min=18688, max=28160, per=99.86%, avg=25664.00, stdev=4652.27, samples=4 00:27:36.410 iops : min= 4672, max= 7040, avg=6416.00, stdev=1163.07, samples=4 00:27:36.410 lat (usec) : 750=0.01%, 1000=0.01% 00:27:36.410 lat (msec) : 2=0.03%, 4=0.14%, 10=69.84%, 20=29.48%, 250=0.50% 00:27:36.410 cpu : usr=50.20%, sys=41.13%, ctx=94, majf=0, minf=20 00:27:36.410 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:27:36.410 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:36.410 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:36.410 issued rwts: total=12885,12895,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:36.410 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:36.410 00:27:36.410 Run status group 0 (all jobs): 00:27:36.410 READ: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=50.3MiB (52.8MB), run=2007-2007msec 00:27:36.410 WRITE: bw=25.1MiB/s (26.3MB/s), 25.1MiB/s-25.1MiB/s (26.3MB/s-26.3MB/s), io=50.4MiB (52.8MB), run=2007-2007msec 00:27:36.410 02:02:21 -- host/fio.sh@59 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:36.410 02:02:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:36.410 02:02:21 -- common/autotest_common.sh@10 -- # set +x 00:27:36.410 02:02:21 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:27:36.410 02:02:21 -- host/fio.sh@62 -- # rpc_cmd bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:36.410 02:02:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:36.410 02:02:21 -- common/autotest_common.sh@10 -- # set +x 00:27:37.340 02:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:37.340 02:02:22 -- host/fio.sh@62 -- # ls_nested_guid=fe73af46-04aa-410f-aa17-0d72ceb812e2 00:27:37.340 02:02:22 -- host/fio.sh@63 -- # get_lvs_free_mb fe73af46-04aa-410f-aa17-0d72ceb812e2 00:27:37.340 02:02:22 -- common/autotest_common.sh@1343 -- # local lvs_uuid=fe73af46-04aa-410f-aa17-0d72ceb812e2 00:27:37.340 02:02:22 -- common/autotest_common.sh@1344 -- # local lvs_info 00:27:37.340 02:02:22 -- common/autotest_common.sh@1345 -- # local fc 00:27:37.340 02:02:22 -- common/autotest_common.sh@1346 -- # local cs 00:27:37.340 02:02:22 -- common/autotest_common.sh@1347 -- # rpc_cmd bdev_lvol_get_lvstores 00:27:37.340 02:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:37.340 02:02:22 -- common/autotest_common.sh@10 -- # set +x 00:27:37.340 02:02:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:37.340 02:02:22 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:27:37.340 { 00:27:37.340 "uuid": "0e211036-c1bb-4340-9461-61d06d85ac39", 00:27:37.340 "name": "lvs_0", 00:27:37.340 "base_bdev": "Nvme0n1", 00:27:37.340 "total_data_clusters": 930, 00:27:37.340 "free_clusters": 0, 00:27:37.340 "block_size": 512, 00:27:37.340 "cluster_size": 1073741824 00:27:37.340 }, 00:27:37.340 { 00:27:37.340 "uuid": "fe73af46-04aa-410f-aa17-0d72ceb812e2", 00:27:37.340 "name": "lvs_n_0", 00:27:37.340 "base_bdev": "c2042631-ecb0-4288-891f-4be2b480f935", 00:27:37.340 "total_data_clusters": 237847, 00:27:37.340 "free_clusters": 237847, 00:27:37.340 "block_size": 512, 00:27:37.340 "cluster_size": 4194304 00:27:37.340 } 00:27:37.340 ]' 00:27:37.340 02:02:22 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="fe73af46-04aa-410f-aa17-0d72ceb812e2") .free_clusters' 00:27:37.340 02:02:22 -- common/autotest_common.sh@1348 -- # fc=237847 00:27:37.340 02:02:22 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="fe73af46-04aa-410f-aa17-0d72ceb812e2") .cluster_size' 00:27:37.340 02:02:22 -- common/autotest_common.sh@1349 -- # cs=4194304 00:27:37.340 02:02:22 -- common/autotest_common.sh@1352 -- # free_mb=951388 00:27:37.340 02:02:22 -- common/autotest_common.sh@1353 -- # echo 951388 00:27:37.341 951388 00:27:37.341 02:02:22 -- host/fio.sh@64 -- # rpc_cmd bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:27:37.341 02:02:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:37.341 02:02:22 -- common/autotest_common.sh@10 -- # set +x 00:27:37.907 fcb0e312-ff70-4a5e-82cc-d6ea03ba09f4 00:27:37.907 02:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:37.907 02:02:23 -- host/fio.sh@65 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:37.907 02:02:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:37.907 02:02:23 -- common/autotest_common.sh@10 -- # set +x 00:27:37.907 02:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:37.907 02:02:23 -- host/fio.sh@66 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:37.907 02:02:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:37.907 02:02:23 -- common/autotest_common.sh@10 -- # set +x 00:27:37.907 02:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:37.907 
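The 951388 passed to bdev_lvol_create above is not arbitrary: get_lvs_free_mb derives it from the lvstore JSON the same way the jq calls in the trace do. A sketch of the arithmetic, with the helper body reconstructed (an assumption) from those jq lines:

    # free_clusters and cluster_size come from:
    #   rpc.py bdev_lvol_get_lvstores | jq '.[] | select(.uuid=="...") ...'
    fc=237847                            # free_clusters of lvs_n_0
    cs=4194304                           # cluster_size in bytes (4 MiB)
    free_mb=$(( fc * (cs / 1048576) ))   # 237847 * 4 = 951388
    echo "$free_mb"

The same computation on lvs_0 earlier gave 930 clusters x 1 GiB = 952320 MB, and in the nvmf_perf test it produced the 20456 that was tested against 20480 before the nested lvol was created.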
02:02:23 -- host/fio.sh@67 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:27:37.907 02:02:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:37.907 02:02:23 -- common/autotest_common.sh@10 -- # set +x 00:27:37.907 02:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:37.907 02:02:23 -- host/fio.sh@68 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:37.907 02:02:23 -- common/autotest_common.sh@1339 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:37.907 02:02:23 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:27:37.907 02:02:23 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:37.907 02:02:23 -- common/autotest_common.sh@1318 -- # local sanitizers 00:27:37.907 02:02:23 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:37.907 02:02:23 -- common/autotest_common.sh@1320 -- # shift 00:27:37.907 02:02:23 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:27:37.907 02:02:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:37.907 02:02:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:37.907 02:02:23 -- common/autotest_common.sh@1324 -- # grep libasan 00:27:37.907 02:02:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:37.907 02:02:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:37.907 02:02:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:37.907 02:02:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:27:37.907 02:02:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:27:37.907 02:02:23 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:27:37.907 02:02:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:27:37.907 02:02:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:27:37.907 02:02:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:27:37.907 02:02:23 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:37.907 02:02:23 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:27:37.907 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:37.907 fio-3.35 00:27:37.907 Starting 1 thread 00:27:37.907 EAL: No free 2048 kB hugepages reported on node 1 00:27:40.435 00:27:40.435 test: (groupid=0, jobs=1): err= 0: pid=2260907: Mon Apr 15 02:02:25 2024 00:27:40.435 read: IOPS=6155, BW=24.0MiB/s (25.2MB/s)(48.3MiB/2009msec) 00:27:40.435 slat (nsec): min=1958, max=169521, avg=2573.41, stdev=2412.74 00:27:40.435 clat (usec): min=5923, max=19322, avg=11520.74, stdev=1003.18 00:27:40.435 lat (usec): min=5932, max=19324, avg=11523.31, stdev=1003.11 00:27:40.435 clat percentiles (usec): 00:27:40.435 | 1.00th=[ 
9241], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10683], 00:27:40.435 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11469], 60.00th=[11731], 00:27:40.435 | 70.00th=[11994], 80.00th=[12387], 90.00th=[12780], 95.00th=[13042], 00:27:40.435 | 99.00th=[13829], 99.50th=[14091], 99.90th=[17171], 99.95th=[17957], 00:27:40.435 | 99.99th=[18482] 00:27:40.435 bw ( KiB/s): min=23280, max=25096, per=99.89%, avg=24594.00, stdev=877.27, samples=4 00:27:40.435 iops : min= 5820, max= 6274, avg=6148.50, stdev=219.32, samples=4 00:27:40.435 write: IOPS=6138, BW=24.0MiB/s (25.1MB/s)(48.2MiB/2009msec); 0 zone resets 00:27:40.435 slat (usec): min=2, max=134, avg= 2.62, stdev= 1.88 00:27:40.435 clat (usec): min=2888, max=17357, avg=9139.41, stdev=896.71 00:27:40.435 lat (usec): min=2895, max=17359, avg=9142.03, stdev=896.67 00:27:40.435 clat percentiles (usec): 00:27:40.435 | 1.00th=[ 7111], 5.00th=[ 7767], 10.00th=[ 8094], 20.00th=[ 8455], 00:27:40.435 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9372], 00:27:40.435 | 70.00th=[ 9503], 80.00th=[ 9765], 90.00th=[10159], 95.00th=[10421], 00:27:40.435 | 99.00th=[11076], 99.50th=[11469], 99.90th=[16057], 99.95th=[16319], 00:27:40.435 | 99.99th=[17433] 00:27:40.435 bw ( KiB/s): min=24384, max=24824, per=99.98%, avg=24548.00, stdev=207.64, samples=4 00:27:40.435 iops : min= 6096, max= 6206, avg=6137.00, stdev=51.91, samples=4 00:27:40.435 lat (msec) : 4=0.02%, 10=45.83%, 20=54.15% 00:27:40.435 cpu : usr=49.20%, sys=42.53%, ctx=77, majf=0, minf=20 00:27:40.435 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:27:40.435 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.435 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:40.435 issued rwts: total=12366,12332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:40.435 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:40.435 00:27:40.435 Run status group 0 (all jobs): 00:27:40.435 READ: bw=24.0MiB/s (25.2MB/s), 24.0MiB/s-24.0MiB/s (25.2MB/s-25.2MB/s), io=48.3MiB (50.7MB), run=2009-2009msec 00:27:40.435 WRITE: bw=24.0MiB/s (25.1MB/s), 24.0MiB/s-24.0MiB/s (25.1MB/s-25.1MB/s), io=48.2MiB (50.5MB), run=2009-2009msec 00:27:40.435 02:02:25 -- host/fio.sh@70 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:40.435 02:02:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:40.435 02:02:25 -- common/autotest_common.sh@10 -- # set +x 00:27:40.435 02:02:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:40.435 02:02:25 -- host/fio.sh@72 -- # sync 00:27:40.435 02:02:25 -- host/fio.sh@74 -- # rpc_cmd bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:40.435 02:02:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:40.435 02:02:25 -- common/autotest_common.sh@10 -- # set +x 00:27:44.617 02:02:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:44.617 02:02:29 -- host/fio.sh@75 -- # rpc_cmd bdev_lvol_delete_lvstore -l lvs_n_0 00:27:44.617 02:02:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:44.617 02:02:29 -- common/autotest_common.sh@10 -- # set +x 00:27:44.617 02:02:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:44.617 02:02:29 -- host/fio.sh@76 -- # rpc_cmd bdev_lvol_delete lvs_0/lbd_0 00:27:44.617 02:02:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:44.617 02:02:29 -- common/autotest_common.sh@10 -- # set +x 00:27:46.515 02:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:46.515 02:02:32 -- host/fio.sh@77 -- # rpc_cmd 
bdev_lvol_delete_lvstore -l lvs_0 00:27:46.515 02:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:46.515 02:02:32 -- common/autotest_common.sh@10 -- # set +x 00:27:46.515 02:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:46.515 02:02:32 -- host/fio.sh@78 -- # rpc_cmd bdev_nvme_detach_controller Nvme0 00:27:46.515 02:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:27:46.516 02:02:32 -- common/autotest_common.sh@10 -- # set +x 00:27:48.479 02:02:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:27:48.479 02:02:33 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:27:48.479 02:02:33 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:27:48.479 02:02:33 -- host/fio.sh@84 -- # nvmftestfini 00:27:48.479 02:02:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:48.479 02:02:33 -- nvmf/common.sh@116 -- # sync 00:27:48.479 02:02:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:27:48.479 02:02:33 -- nvmf/common.sh@119 -- # set +e 00:27:48.479 02:02:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:48.479 02:02:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:27:48.479 rmmod nvme_tcp 00:27:48.479 rmmod nvme_fabrics 00:27:48.479 rmmod nvme_keyring 00:27:48.479 02:02:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:48.479 02:02:33 -- nvmf/common.sh@123 -- # set -e 00:27:48.479 02:02:33 -- nvmf/common.sh@124 -- # return 0 00:27:48.479 02:02:33 -- nvmf/common.sh@477 -- # '[' -n 2258561 ']' 00:27:48.479 02:02:33 -- nvmf/common.sh@478 -- # killprocess 2258561 00:27:48.479 02:02:33 -- common/autotest_common.sh@926 -- # '[' -z 2258561 ']' 00:27:48.479 02:02:33 -- common/autotest_common.sh@930 -- # kill -0 2258561 00:27:48.479 02:02:33 -- common/autotest_common.sh@931 -- # uname 00:27:48.479 02:02:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:27:48.479 02:02:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2258561 00:27:48.479 02:02:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:27:48.479 02:02:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:27:48.479 02:02:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2258561' 00:27:48.479 killing process with pid 2258561 00:27:48.479 02:02:33 -- common/autotest_common.sh@945 -- # kill 2258561 00:27:48.479 02:02:33 -- common/autotest_common.sh@950 -- # wait 2258561 00:27:48.739 02:02:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:48.739 02:02:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:27:48.739 02:02:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:27:48.739 02:02:34 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:48.739 02:02:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:27:48.739 02:02:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.739 02:02:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:48.739 02:02:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.645 02:02:36 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:27:50.645 00:27:50.645 real 0m31.016s 00:27:50.645 user 1m51.627s 00:27:50.645 sys 0m5.912s 00:27:50.645 02:02:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:50.645 02:02:36 -- common/autotest_common.sh@10 -- # set +x 00:27:50.645 ************************************ 00:27:50.645 END TEST nvmf_fio_host 00:27:50.645 ************************************ 00:27:50.646 02:02:36 -- nvmf/nvmf.sh@99 -- # run_test nvmf_failover 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:50.646 02:02:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:27:50.646 02:02:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:27:50.646 02:02:36 -- common/autotest_common.sh@10 -- # set +x 00:27:50.646 ************************************ 00:27:50.646 START TEST nvmf_failover 00:27:50.646 ************************************ 00:27:50.646 02:02:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:27:50.646 * Looking for test storage... 00:27:50.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:50.646 02:02:36 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.646 02:02:36 -- nvmf/common.sh@7 -- # uname -s 00:27:50.646 02:02:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.646 02:02:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.646 02:02:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.646 02:02:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.646 02:02:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.646 02:02:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.646 02:02:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.646 02:02:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.646 02:02:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.646 02:02:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.646 02:02:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:50.646 02:02:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:50.646 02:02:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.646 02:02:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.646 02:02:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.646 02:02:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.646 02:02:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.646 02:02:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.646 02:02:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.646 02:02:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.646 02:02:36 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.646 02:02:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.646 02:02:36 -- paths/export.sh@5 -- # export PATH 00:27:50.646 02:02:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.646 02:02:36 -- nvmf/common.sh@46 -- # : 0 00:27:50.646 02:02:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:50.646 02:02:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:50.646 02:02:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:50.646 02:02:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.646 02:02:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.646 02:02:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:50.646 02:02:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:50.646 02:02:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:50.646 02:02:36 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:50.646 02:02:36 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:50.646 02:02:36 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:50.646 02:02:36 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:50.646 02:02:36 -- host/failover.sh@18 -- # nvmftestinit 00:27:50.646 02:02:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:27:50.646 02:02:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.646 02:02:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:50.646 02:02:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:50.646 02:02:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:50.646 02:02:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.646 02:02:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:50.646 02:02:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.646 02:02:36 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:50.646 02:02:36 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 
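The gather_supported_nvmf_pci_devs walk that follows buckets every NIC on the PCI bus by vendor:device ID and then picks the test interfaces from the preferred family. Stripped of the xtrace noise, the logic amounts to roughly the following (a condensed sketch of the nvmf/common.sh flow, assuming the pci_bus_cache associative array mapping "vendor:device" to PCI addresses was populated earlier by scripts/common.sh; only a few of the mlx IDs are shown):

  intel=0x8086 mellanox=0x15b3
  e810+=(${pci_bus_cache["$intel:0x1592"]})
  e810+=(${pci_bus_cache["$intel:0x159b"]})      # the pair found below: 0000:0a:00.0/1 (ice)
  x722+=(${pci_bus_cache["$intel:0x37d2"]})
  mlx+=(${pci_bus_cache["$mellanox:0x1017"]})    # plus several other mlx5 device IDs
  pci_devs=("${e810[@]}")                        # e810 is the family this TCP phy run selects
  for pci in "${pci_devs[@]}"; do
      # Each PCI function exposes its kernel netdev name via sysfs; the
      # basenames (cvl_0_0, cvl_0_1) are what the namespace setup below uses.
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
      net_devs+=("${pci_net_devs[@]##*/}")
  done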
00:27:50.646 02:02:36 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:50.646 02:02:36 -- common/autotest_common.sh@10 -- # set +x 00:27:52.547 02:02:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:52.547 02:02:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:52.547 02:02:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:52.547 02:02:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:52.547 02:02:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:52.547 02:02:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:52.547 02:02:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:52.547 02:02:38 -- nvmf/common.sh@294 -- # net_devs=() 00:27:52.547 02:02:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:52.547 02:02:38 -- nvmf/common.sh@295 -- # e810=() 00:27:52.547 02:02:38 -- nvmf/common.sh@295 -- # local -ga e810 00:27:52.547 02:02:38 -- nvmf/common.sh@296 -- # x722=() 00:27:52.547 02:02:38 -- nvmf/common.sh@296 -- # local -ga x722 00:27:52.547 02:02:38 -- nvmf/common.sh@297 -- # mlx=() 00:27:52.547 02:02:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:52.547 02:02:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.548 02:02:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.548 02:02:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.548 02:02:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.548 02:02:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.548 02:02:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.548 02:02:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.548 02:02:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.548 02:02:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.548 02:02:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.548 02:02:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.548 02:02:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:52.548 02:02:38 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:27:52.548 02:02:38 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:27:52.548 02:02:38 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:27:52.548 02:02:38 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:27:52.548 02:02:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:52.548 02:02:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:52.548 02:02:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:52.548 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:52.548 02:02:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:52.548 02:02:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:52.548 02:02:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.548 02:02:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.548 02:02:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:52.548 02:02:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:52.548 02:02:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:52.548 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:52.548 02:02:38 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:27:52.548 02:02:38 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:27:52.548 02:02:38 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:27:52.548 02:02:38 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.548 02:02:38 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:27:52.548 02:02:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:52.548 02:02:38 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:27:52.548 02:02:38 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:27:52.548 02:02:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:52.548 02:02:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.548 02:02:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:52.548 02:02:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.548 02:02:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:52.548 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:52.548 02:02:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.548 02:02:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:52.548 02:02:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.548 02:02:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:52.548 02:02:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.548 02:02:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:52.548 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:52.548 02:02:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.548 02:02:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:52.548 02:02:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:52.548 02:02:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:52.548 02:02:38 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:27:52.548 02:02:38 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:27:52.548 02:02:38 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.548 02:02:38 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.548 02:02:38 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.548 02:02:38 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:27:52.548 02:02:38 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.548 02:02:38 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.548 02:02:38 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:27:52.548 02:02:38 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.548 02:02:38 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.548 02:02:38 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:27:52.548 02:02:38 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:27:52.548 02:02:38 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.548 02:02:38 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.548 02:02:38 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.548 02:02:38 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.548 02:02:38 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:27:52.548 02:02:38 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.807 02:02:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.807 02:02:38 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.807 02:02:38 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:27:52.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:27:52.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:27:52.807 00:27:52.807 --- 10.0.0.2 ping statistics --- 00:27:52.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.807 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:27:52.807 02:02:38 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:52.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:27:52.807 00:27:52.807 --- 10.0.0.1 ping statistics --- 00:27:52.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.807 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:27:52.807 02:02:38 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.807 02:02:38 -- nvmf/common.sh@410 -- # return 0 00:27:52.807 02:02:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:52.807 02:02:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.807 02:02:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:27:52.807 02:02:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:27:52.807 02:02:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.807 02:02:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:27:52.807 02:02:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:27:52.807 02:02:38 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:27:52.807 02:02:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:27:52.807 02:02:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:27:52.807 02:02:38 -- common/autotest_common.sh@10 -- # set +x 00:27:52.807 02:02:38 -- nvmf/common.sh@469 -- # nvmfpid=2264065 00:27:52.807 02:02:38 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:52.807 02:02:38 -- nvmf/common.sh@470 -- # waitforlisten 2264065 00:27:52.807 02:02:38 -- common/autotest_common.sh@819 -- # '[' -z 2264065 ']' 00:27:52.807 02:02:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.807 02:02:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:27:52.807 02:02:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.808 02:02:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:52.808 02:02:38 -- common/autotest_common.sh@10 -- # set +x 00:27:52.808 [2024-04-15 02:02:38.326873] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:27:52.808 [2024-04-15 02:02:38.326962] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.808 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.808 [2024-04-15 02:02:38.397068] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:53.066 [2024-04-15 02:02:38.486156] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:53.066 [2024-04-15 02:02:38.486320] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:53.066 [2024-04-15 02:02:38.486349] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
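Everything between the ip netns commands and the two pings above wires the two E810 ports into a physical loopback: cvl_0_0 (10.0.0.2, the target side) is moved into the cvl_0_0_ns_spdk namespace while cvl_0_1 (10.0.0.1, the initiator side) stays in the root namespace, so the NVMe/TCP traffic genuinely crosses the link, and nvmf_tgt is then launched inside that namespace. Condensed from the trace (a sketch; the addr-flush steps and error handling are omitted):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the first listener port
  ping -c 1 10.0.0.2                                     # root ns -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> root ns
  # the target itself then runs entirely inside the namespace:
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  # the harness then polls /var/tmp/spdk.sock via waitforlisten before issuing RPCs

This is what lets a single two-port host act as both initiator and target over a real wire; -m 0xE gives the target reactors on cores 1-3, which is why three "Reactor started" notices follow.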
00:27:53.066 [2024-04-15 02:02:38.486364] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:53.066 [2024-04-15 02:02:38.486467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:53.066 [2024-04-15 02:02:38.486566] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:53.066 [2024-04-15 02:02:38.486568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:53.631 02:02:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:53.631 02:02:39 -- common/autotest_common.sh@852 -- # return 0 00:27:53.631 02:02:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:27:53.631 02:02:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:27:53.631 02:02:39 -- common/autotest_common.sh@10 -- # set +x 00:27:53.631 02:02:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.631 02:02:39 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:53.888 [2024-04-15 02:02:39.489612] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:53.888 02:02:39 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:54.145 Malloc0 00:27:54.145 02:02:39 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:54.403 02:02:39 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:54.661 02:02:40 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:54.918 [2024-04-15 02:02:40.485924] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:54.918 02:02:40 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:55.176 [2024-04-15 02:02:40.730680] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:55.176 02:02:40 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:55.433 [2024-04-15 02:02:40.963505] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:55.433 02:02:40 -- host/failover.sh@31 -- # bdevperf_pid=2264365 00:27:55.433 02:02:40 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:27:55.433 02:02:40 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:55.433 02:02:40 -- host/failover.sh@34 -- # waitforlisten 2264365 /var/tmp/bdevperf.sock 00:27:55.433 02:02:40 -- common/autotest_common.sh@819 -- # '[' -z 2264365 ']' 00:27:55.433 02:02:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:55.433 02:02:40 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:27:55.433 02:02:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:55.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:55.433 02:02:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:27:55.433 02:02:40 -- common/autotest_common.sh@10 -- # set +x 00:27:56.365 02:02:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:27:56.365 02:02:41 -- common/autotest_common.sh@852 -- # return 0 00:27:56.365 02:02:41 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:56.929 NVMe0n1 00:27:56.929 02:02:42 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:57.187 00:27:57.187 02:02:42 -- host/failover.sh@39 -- # run_test_pid=2264639 00:27:57.187 02:02:42 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:57.187 02:02:42 -- host/failover.sh@41 -- # sleep 1 00:27:58.120 02:02:43 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:58.378 [2024-04-15 02:02:43.840341] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119510 is same with the state(5) to be set 00:27:58.378 [2024-04-15 02:02:43.840481] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119510 is same with the state(5) to be set 00:27:58.378 [2024-04-15 02:02:43.840514] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119510 is same with the state(5) to be set 00:27:58.378 [2024-04-15 02:02:43.840527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119510 is same with the state(5) to be set 00:27:58.378 [2024-04-15 02:02:43.840539] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119510 is same with the state(5) to be set 00:27:58.378 [2024-04-15 02:02:43.840553] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119510 is same with the state(5) to be set 00:27:58.378 [2024-04-15 02:02:43.840580] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119510 is same with the state(5) to be set 00:27:58.378 [2024-04-15 02:02:43.840592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119510 is same with the state(5) to be set 00:27:58.378 [2024-04-15 02:02:43.840604] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119510 is same with the state(5) to be set 00:27:58.378 [2024-04-15 02:02:43.840616] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119510 is same with the state(5) to be set 00:27:58.378 [2024-04-15 02:02:43.840627] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119510 is same with the state(5) to be set 00:27:58.378 [2024-04-15 02:02:43.840639] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2119510 is same with the state(5) to be set 
00:27:58.378 [the preceding tcp.c:1574 nvmf_tcp_qpair_set_recv_state error for tqpair=0x2119510 repeats identically through 02:02:43.841 while the 4420 connection is torn down; duplicate entries condensed] 00:27:58.379 02:02:43 -- host/failover.sh@45 -- # sleep 3 00:28:01.661 02:02:46 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:01.661 00:28:01.918 02:02:47 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:01.919 [identical tcp.c:1574 recv-state errors for tqpair=0x211aa60, 02:02:47.547-548; duplicate entries condensed]
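The error bursts above are the expected signature of a failover pass rather than a test failure: each nvmf_subsystem_remove_listener tears down the port that bdevperf is actively using, the target logs the qpair recv-state complaints while those connections die, and the host side of NVMe0 is expected to fail over to the next attached path. Reduced to its moving parts, the cycle host/failover.sh drives looks like this (a shorthand sketch, not the script verbatim; rpc and brpc below are local abbreviations for the target and bdevperf RPC invocations seen in the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # target RPC
  brpc="$rpc -s /var/tmp/bdevperf.sock"                                  # bdevperf RPC
  # two paths to the same subsystem give NVMe0 something to fail over to
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # I/O moves to 4421
  sleep 3
  $brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421   # I/O moves to 4422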
00:28:02.177 02:02:47 -- host/failover.sh@50 -- # sleep 3 00:28:05.456 02:02:50 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:05.456 [2024-04-15 02:02:50.808396] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:05.456 02:02:50 -- host/failover.sh@55 -- # sleep 1 00:28:06.390 02:02:51 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:06.680 [identical tcp.c:1574 recv-state errors for tqpair=0x211b140, 02:02:52.051-052; duplicate entries condensed]
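At this point the script has cycled through all three ports: 4420 was removed first, then 4421, and now, after re-adding 4420, 4422 is removed so I/O lands back on the original listener. Note that bdevperf was started with -z, so it idled on /var/tmp/bdevperf.sock until bdevperf.py perform_tests fired the 15-second verify workload; the wait below simply collects that RPC's result (the lone 0 that follows). The harness pattern, as a sketch (not a verbatim replay; paths relative to the spdk tree):

  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  # ... attach the NVMe0 paths over the bdevperf RPC socket, as shown earlier ...
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!
  # ... add/remove listeners while the verify workload runs ...
  wait $run_test_pid    # returns once the 15 s run finishes; perform_tests prints its result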
00:28:06.681 02:02:52 -- host/failover.sh@59 -- # wait 2264639
00:28:13.278 0
00:28:13.278 02:02:57 -- host/failover.sh@61 -- # killprocess 2264365
00:28:13.278 02:02:57 -- common/autotest_common.sh@926 -- # '[' -z 2264365 ']'
00:28:13.278 02:02:57 -- common/autotest_common.sh@930 -- # kill -0 2264365
00:28:13.278 02:02:57 -- common/autotest_common.sh@931 -- # uname
00:28:13.278 02:02:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:28:13.278 02:02:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2264365
00:28:13.278 02:02:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:28:13.278 02:02:57 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:28:13.278 02:02:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2264365'
00:28:13.278 killing process with pid 2264365
00:28:13.278 02:02:57 -- common/autotest_common.sh@945 -- # kill 2264365
00:28:13.278 02:02:57 -- common/autotest_common.sh@950 -- # wait 2264365
00:28:13.278 02:02:58 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:28:13.278 [2024-04-15 02:02:41.019285] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
00:28:13.278 [2024-04-15 02:02:41.019386] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2264365 ]
00:28:13.278 EAL: No free 2048 kB hugepages reported on node 1
00:28:13.278 [2024-04-15 02:02:41.080108] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:13.278 [2024-04-15 02:02:41.164578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:28:13.278 Running I/O for 15 seconds...
00:28:13.278 [2024-04-15 02:02:43.841713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:122064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.278 [2024-04-15 02:02:43.841757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command / "ABORTED - SQ DELETION" completion pair repeats for the remaining in-flight I/O on sqid:1 (roughly 130 READ and WRITE commands, lba 121616-122800), 02:02:43.841788 through 02:02:43.845750 ...]
00:28:13.281 [2024-04-15 02:02:43.845765] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e42320 is same with the state(5) to be set
00:28:13.281 [2024-04-15 02:02:43.845783] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:28:13.281 [2024-04-15 02:02:43.845795] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:28:13.281 [2024-04-15 02:02:43.845811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:122320 len:8 PRP1 0x0 PRP2 0x0
00:28:13.281 [2024-04-15 02:02:43.845825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:13.281 [2024-04-15 02:02:43.845889] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e42320 was disconnected and freed. reset controller.
00:28:13.281 [2024-04-15 02:02:43.845908] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:28:13.281 [2024-04-15 02:02:43.845940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:13.281 [2024-04-15 02:02:43.845958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the ASYNC EVENT REQUEST / "ABORTED - SQ DELETION" pair repeats for admin cid:1, cid:2 and cid:3, 02:02:43.845974 through 02:02:43.846043 ...]
00:28:13.282 [2024-04-15 02:02:43.846064] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:13.282 [2024-04-15 02:02:43.846116] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e23790 (9): Bad file descriptor
00:28:13.282 [2024-04-15 02:02:43.848287] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:13.282 [2024-04-15 02:02:43.963811] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
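The sequence above is the normal bdev_nvme failover path rather than a crash: deleting the submission queues on 10.0.0.2:4420 forces every in-flight command to be completed manually with ABORTED - SQ DELETION, after which the controller is reset against 10.0.0.2:4421. A minimal shell sketch for sanity-checking a run like this is below; it assumes the try.txt path printed by host/failover.sh@63 above and one log entry per line, and the grep/sed patterns are illustrative, not part of the test suite:

    # total I/O completions aborted by submission-queue deletion across both path switches
    grep -c 'ABORTED - SQ DELETION' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    # one failover notice per path switch, each followed by a successful reset
    grep -E 'bdev_nvme_failover_trid|Resetting controller successful' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    # collapse repeated messages syslog-style: strip the two timestamp columns, then count duplicates
    sed -E 's/^[0-9:.]+ \[[^]]+\] //' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt | sort | uniq -c | sort -rn | head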
00:28:13.282 [2024-04-15 02:02:47.548817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:36160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:13.282 [2024-04-15 02:02:47.548861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command / "ABORTED - SQ DELETION" completion pair repeats for roughly 40 more in-flight READ and WRITE commands on the new path (lba 35608-36400), 02:02:47.548894 through 02:02:47.550101 ...]
00:28:13.283 [2024-04-15 02:02:47.550117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1
lba:36408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:36416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:36424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:36432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.283 [2024-04-15 02:02:47.550222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:36440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.283 [2024-04-15 02:02:47.550252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:36448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:36456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.283 [2024-04-15 02:02:47.550318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:36464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:36472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:36480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:36488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:13.283 [2024-04-15 02:02:47.550452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:36496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.283 [2024-04-15 02:02:47.550484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:36504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:36512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:36520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:36528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:36536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:36544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:36552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.283 [2024-04-15 02:02:47.550687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:36560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:36568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550745] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:36576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.283 [2024-04-15 02:02:47.550774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.283 [2024-04-15 02:02:47.550803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:36592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:36600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.283 [2024-04-15 02:02:47.550866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.283 [2024-04-15 02:02:47.550895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:36616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:36624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:36632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.283 [2024-04-15 02:02:47.550983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.283 [2024-04-15 02:02:47.550998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:35992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.283 [2024-04-15 02:02:47.551012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:36032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551068] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:36056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:36064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:36072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:36088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:36128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:36144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:36640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.284 [2024-04-15 02:02:47.551286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:36648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.284 [2024-04-15 02:02:47.551316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:36664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.284 [2024-04-15 02:02:47.551397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:36672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.284 [2024-04-15 02:02:47.551456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:36688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:36696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.284 [2024-04-15 02:02:47.551514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:36704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.284 [2024-04-15 02:02:47.551543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:36712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.284 [2024-04-15 02:02:47.551572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:36720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.284 [2024-04-15 02:02:47.551601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:36728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.284 [2024-04-15 02:02:47.551631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:36744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:13.284 [2024-04-15 02:02:47.551708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:36752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:36176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:36208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:36216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:36264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:36288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.551963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.551978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:36760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.284 [2024-04-15 02:02:47.551992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.552007] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:36768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.552021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.552064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:36776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.284 [2024-04-15 02:02:47.552081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.552097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:36784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.284 [2024-04-15 02:02:47.552112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.552127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.552142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.552157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:36800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.552172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.552187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:36808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.284 [2024-04-15 02:02:47.552202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.552218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:36816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.552232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.284 [2024-04-15 02:02:47.552247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:36824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.284 [2024-04-15 02:02:47.552261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:47.552291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:36840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:47.552321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552352] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:36848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.285 [2024-04-15 02:02:47.552367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.285 [2024-04-15 02:02:47.552401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:36864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.285 [2024-04-15 02:02:47.552432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:36872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:47.552465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:36880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:47.552494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:36888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.285 [2024-04-15 02:02:47.552524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:36896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:47.552552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:36904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:47.552581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:36912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:13.285 [2024-04-15 02:02:47.552610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:36296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:47.552656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:36304 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:47.552687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:36320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:47.552717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:36328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:47.552747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:36336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:47.552785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:36344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:47.552815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:36368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:47.552845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e2fcb0 is same with the state(5) to be set 00:28:13.285 [2024-04-15 02:02:47.552881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:13.285 [2024-04-15 02:02:47.552893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:13.285 [2024-04-15 02:02:47.552911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36376 len:8 PRP1 0x0 PRP2 0x0 00:28:13.285 [2024-04-15 02:02:47.552926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.552987] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e2fcb0 was disconnected and freed. reset controller. 
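The repeated "(00/08)" in the records above is the NVMe status-code-type / status-code pair: SCT 00h (generic command status) with SC 08h, "Command Aborted due to SQ Deletion" — i.e. every command still queued on the submission queue was completed in error when the qpair went down. The following is a minimal standalone C sketch (not the SPDK source itself) that mirrors the NVMe completion status word and reproduces the fields these spdk_nvme_print_completion records print; the struct and function names are illustrative only.

/* Standalone sketch: decode the status half of an NVMe completion the way
 * the "ABORTED - SQ DELETION (00/08) ... p:0 m:0 dnr:0" records above print it.
 * The bitfield layout mirrors the upper 16 bits of completion dword 3. */
#include <stdint.h>
#include <stdio.h>

struct nvme_status {
    uint16_t p   : 1;   /* phase tag */
    uint16_t sc  : 8;   /* status code       -> the "08" */
    uint16_t sct : 3;   /* status code type  -> the "00" */
    uint16_t crd : 2;   /* command retry delay */
    uint16_t m   : 1;   /* more */
    uint16_t dnr : 1;   /* do not retry */
};

static const char *status_string(const struct nvme_status *s)
{
    /* Generic command status 08h: Command Aborted due to SQ Deletion. */
    if (s->sct == 0x0 && s->sc == 0x08) {
        return "ABORTED - SQ DELETION";
    }
    return "OTHER";
}

int main(void)
{
    struct nvme_status s = { .p = 0, .sc = 0x08, .sct = 0x0, .m = 0, .dnr = 0 };

    /* Matches the "(00/08) ... p:0 m:0 dnr:0" fields in the log records above. */
    printf("%s (%02x/%02x) p:%u m:%u dnr:%u\n",
           status_string(&s), (unsigned)s.sct, (unsigned)s.sc,
           (unsigned)s.p, (unsigned)s.m, (unsigned)s.dnr);
    return 0;
}

Note that every record carries dnr:0 ("do not retry" clear), which is what allows the aborted I/O to be requeued once the controller comes back on a live path in the failover sequence that follows.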
00:28:13.285 [2024-04-15 02:02:47.553006] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:28:13.285 [2024-04-15 02:02:47.553038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.285 [2024-04-15 02:02:47.553064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.553080] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.285 [2024-04-15 02:02:47.553094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.553108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.285 [2024-04-15 02:02:47.553136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.553151] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.285 [2024-04-15 02:02:47.553165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:47.553179] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:13.285 [2024-04-15 02:02:47.555271] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:13.285 [2024-04-15 02:02:47.555310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e23790 (9): Bad file descriptor 00:28:13.285 [2024-04-15 02:02:47.584016] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
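The records above show the recovery path end to end: the TCP qpair dies (the "Bad file descriptor" flush error on tqpair 0x1e23790), bdev_nvme fails over from 10.0.0.2:4421 to 10.0.0.2:4422, and the controller reset completes successfully. Below is a minimal sketch of the same poll-and-recover loop written against SPDK's public NVMe API, assuming a controller already connected with spdk_nvme_connect(); it is an illustration, not the bdev_nvme internals, and it omits the step where bdev_nvme swaps in the failover transport ID (cf. spdk_nvme_ctrlr_set_trid()) before resetting.

/* Hypothetical host-side recovery loop on SPDK's public API. The bdev_nvme
 * module logged above does the equivalent internally: abort queued I/O, free
 * the dead qpair, fail over to the next transport ID, reset the controller. */
#include <stdio.h>
#include "spdk/nvme.h"

/* Poll one I/O qpair; on transport failure, recycle it through a reset.
 * Returns the (possibly new) qpair to keep polling, or NULL on failure. */
static struct spdk_nvme_qpair *
poll_or_recover(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
    int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0 /* no limit */);

    if (rc >= 0) {
        return qpair;   /* healthy: rc completions were reaped */
    }

    /* Transport broke: queued commands complete as ABORTED - SQ DELETION
     * with dnr:0, so the upper layer may resubmit them after recovery. */
    spdk_nvme_ctrlr_free_io_qpair(qpair);

    if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
        fprintf(stderr, "controller reset failed\n");
        return NULL;
    }

    return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
}

Once the reset succeeds (as in "_bdev_nvme_reset_ctrlr_complete: Resetting controller successful." above), I/O resumes on the new path, which is why the next burst of records below carries fresh LBAs against the 10.0.0.2:4422 connection.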
00:28:13.285 [2024-04-15 02:02:52.052530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:52.052574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:52.052602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:99872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:52.052618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:52.052636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:99896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:52.052651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:52.052667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:99904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:52.052682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:52.052705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:52.052736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:52.052752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:52.052767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:52.052782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:52.052796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:52.052811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:52.052825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:52.052840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:52.052855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:52.052870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:52.052884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 
02:02:52.052899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:52.052913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:52.052929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:52.052943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.285 [2024-04-15 02:02:52.052958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.285 [2024-04-15 02:02:52.052972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.286 [2024-04-15 02:02:52.052987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.286 [2024-04-15 02:02:52.053001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.286 [2024-04-15 02:02:52.053016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.286 [2024-04-15 02:02:52.053031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.286 [2024-04-15 02:02:52.053053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.286 [2024-04-15 02:02:52.053085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.286 [2024-04-15 02:02:52.053102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.286 [2024-04-15 02:02:52.053117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.286 [2024-04-15 02:02:52.053143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:99536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.286 [2024-04-15 02:02:52.053158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.286 [2024-04-15 02:02:52.053174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.286 [2024-04-15 02:02:52.053188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.286 [2024-04-15 02:02:52.053204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.286 [2024-04-15 02:02:52.053218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.286 [2024-04-15 02:02:52.053233] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.286 [2024-04-15 02:02:52.053248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.286 [2024-04-15 02:02:52.053263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.286 [2024-04-15 02:02:52.053278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.286 [2024-04-15 02:02:52.053294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.286 [2024-04-15 02:02:52.053308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.286 [2024-04-15 02:02:52.053323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:99952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.286 [2024-04-15 02:02:52.053338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.286 [2024-04-15 02:02:52.053369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.286 [2024-04-15 02:02:52.053383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.286 [2024-04-15 02:02:52.053399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.286 [2024-04-15 02:02:52.053413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.286 [2024-04-15 02:02:52.053428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.286 [2024-04-15 02:02:52.053442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.286 [2024-04-15 02:02:52.053458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:100000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.286 [2024-04-15 02:02:52.053471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.286 [2024-04-15 02:02:52.053486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.286 [2024-04-15 02:02:52.053500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.286 [2024-04-15 02:02:52.053515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:100024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.286 [2024-04-15 02:02:52.053532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.286 [2024-04-15 02:02:52.053548] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:27 nsid:1 lba:100032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:13.286 [2024-04-15 02:02:52.053562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: 02:02:52.053577 through 02:02:52.056522 — a long run of near-identical nvme_qpair.c 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion pairs, differing only in cid and lba; every remaining queued READ/WRITE on sqid:1 (lba 99576-100600, len:8) completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:28:13.289 [2024-04-15 02:02:52.056537] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e46230 is same with the state(5) to be set 00:28:13.289 [2024-04-15 02:02:52.056554] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:13.289 [2024-04-15 02:02:52.056566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:13.289 [2024-04-15 02:02:52.056578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:100120 len:8 PRP1 0x0 PRP2 0x0 00:28:13.289 [2024-04-15 02:02:52.056591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.289 [2024-04-15 02:02:52.056654] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e46230 was disconnected and freed. reset controller.
00:28:13.289 [2024-04-15 02:02:52.056672] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:28:13.289 [2024-04-15 02:02:52.056716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.289 [2024-04-15 02:02:52.056736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.289 [2024-04-15 02:02:52.056760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.289 [2024-04-15 02:02:52.056783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.289 [2024-04-15 02:02:52.056807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.289 [2024-04-15 02:02:52.056829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.289 [2024-04-15 02:02:52.056852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:13.289 [2024-04-15 02:02:52.056874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:13.289 [2024-04-15 02:02:52.056896] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:13.289 [2024-04-15 02:02:52.056960] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e23790 (9): Bad file descriptor 00:28:13.289 [2024-04-15 02:02:52.059096] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:13.289 [2024-04-15 02:02:52.093497] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
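[annotation] The run above is the failover path working as intended: the dying TCP qpair's queued I/O is completed as ABORTED - SQ DELETION, bdev_nvme fails the transport ID over (here from 10.0.0.2:4422 back to 10.0.0.2:4420), and the controller reset succeeds. A minimal sketch of how failover.sh provokes this via rpc.py — the same commands appear verbatim in the trace below; the only assumption is running them from the spdk repo root so scripts/rpc.py resolves:

    # Target side (default /var/tmp/spdk.sock): one subsystem, extra listeners.
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    # Initiator side (bdevperf's RPC socket): attach the same controller name once
    # per path, so bdev_nvme records the alternates as failover trids.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Drop the active path under load; bdev_nvme aborts its queued I/O (the
    # SQ DELETION notices above) and resets onto the next recorded trid.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1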
00:28:13.289
00:28:13.289 Latency(us)
00:28:13.289 Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s   Average      min       max
00:28:13.289 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:13.289 Verification LBA range: start 0x0 length 0x4000
00:28:13.289 NVMe0n1            : 15.01      13018.56   50.85   684.32   0.00   9324.89   916.29   14951.92
00:28:13.289 ===================================================================================================================
00:28:13.289 Total              :            13018.56   50.85   684.32   0.00   9324.89   916.29   14951.92
00:28:13.289 Received shutdown signal, test time was about 15.000000 seconds
00:28:13.289
00:28:13.289 Latency(us)
00:28:13.289 Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s   Average      min       max
00:28:13.289 ===================================================================================================================
00:28:13.289 Total              :                0.00     0.00     0.00   0.00      0.00     0.00       0.00
00:28:13.289 02:02:58 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:28:13.289 02:02:58 -- host/failover.sh@65 -- # count=3 00:28:13.289 02:02:58 -- host/failover.sh@67 -- # (( count != 3 )) 00:28:13.289 02:02:58 -- host/failover.sh@73 -- # bdevperf_pid=2266410 00:28:13.289 02:02:58 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:28:13.289 02:02:58 -- host/failover.sh@75 -- # waitforlisten 2266410 /var/tmp/bdevperf.sock 00:28:13.289 02:02:58 -- common/autotest_common.sh@819 -- # '[' -z 2266410 ']' 00:28:13.289 02:02:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:13.289 02:02:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:13.289 02:02:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
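[annotation] The @65-@67 lines above are the pass criterion for the first phase: the captured bdevperf output must contain exactly three 'Resetting controller successful' notices, one per path that was torn down. A rough standalone equivalent of that check (try.txt is the capture file this test writes under test/nvmf/host/):

    # Rough equivalent of host/failover.sh@65-67.
    count=$(grep -c 'Resetting controller successful' try.txt)
    (( count != 3 )) && exit 1   # three failovers induced, so exactly three successful resets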
00:28:13.289 02:02:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:13.289 02:02:58 -- common/autotest_common.sh@10 -- # set +x 00:28:13.547 02:02:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:13.547 02:02:59 -- common/autotest_common.sh@852 -- # return 0 00:28:13.547 02:02:59 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:13.804 [2024-04-15 02:02:59.218701] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:13.804 02:02:59 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:28:14.062 [2024-04-15 02:02:59.463362] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:28:14.062 02:02:59 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:14.320 NVMe0n1 00:28:14.320 02:02:59 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:14.886 00:28:14.886 02:03:00 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:15.144 00:28:15.144 02:03:00 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:15.144 02:03:00 -- host/failover.sh@82 -- # grep -q NVMe0 00:28:15.402 02:03:00 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:15.660 02:03:01 -- host/failover.sh@87 -- # sleep 3 00:28:18.936 02:03:04 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:18.936 02:03:04 -- host/failover.sh@88 -- # grep -q NVMe0 00:28:18.936 02:03:04 -- host/failover.sh@90 -- # run_test_pid=2267343 00:28:18.936 02:03:04 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:18.936 02:03:04 -- host/failover.sh@92 -- # wait 2267343 00:28:20.315 0 00:28:20.315 02:03:05 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:20.315 [2024-04-15 02:02:58.062164] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:28:20.315 [2024-04-15 02:02:58.062248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2266410 ] 00:28:20.315 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.316 [2024-04-15 02:02:58.121941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.316 [2024-04-15 02:02:58.206916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.316 [2024-04-15 02:03:01.213446] bdev_nvme.c:1829:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:28:20.316 [2024-04-15 02:03:01.213527] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.316 [2024-04-15 02:03:01.213550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.316 [2024-04-15 02:03:01.213581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.316 [2024-04-15 02:03:01.213596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.316 [2024-04-15 02:03:01.213610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.316 [2024-04-15 02:03:01.213623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.316 [2024-04-15 02:03:01.213637] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:20.316 [2024-04-15 02:03:01.213650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:20.316 [2024-04-15 02:03:01.213664] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:20.316 [2024-04-15 02:03:01.213701] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:20.316 [2024-04-15 02:03:01.213730] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1428790 (9): Bad file descriptor 00:28:20.316 [2024-04-15 02:03:01.259502] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:20.316 Running I/O for 1 seconds... 
00:28:20.316
00:28:20.316 Latency(us)
00:28:20.316 Device Information : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average       min       max
00:28:20.316 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:20.316 Verification LBA range: start 0x0 length 0x4000
00:28:20.316 NVMe0n1            : 1.01       10873.57   42.47     0.00   0.00   11722.58   1771.90   14369.37
00:28:20.316 ===================================================================================================================
00:28:20.316 Total              :            10873.57   42.47     0.00   0.00   11722.58   1771.90   14369.37
00:28:20.316 02:03:05 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 02:03:05 -- host/failover.sh@95 -- # grep -q NVMe0 02:03:05 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:20.574 02:03:06 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 02:03:06 -- host/failover.sh@99 -- # grep -q NVMe0 00:28:20.832 02:03:06 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:21.089 02:03:06 -- host/failover.sh@101 -- # sleep 3 00:28:24.370 02:03:09 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 02:03:09 -- host/failover.sh@103 -- # grep -q NVMe0 02:03:09 -- host/failover.sh@108 -- # killprocess 2266410 02:03:09 -- common/autotest_common.sh@926 -- # '[' -z 2266410 ']' 02:03:09 -- common/autotest_common.sh@930 -- # kill -0 2266410 02:03:09 -- common/autotest_common.sh@931 -- # uname 02:03:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 02:03:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2266410 02:03:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 02:03:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 02:03:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2266410' killing process with pid 2266410 02:03:09 -- common/autotest_common.sh@945 -- # kill 2266410 02:03:09 -- common/autotest_common.sh@950 -- # wait 2266410 00:28:24.628 02:03:10 -- host/failover.sh@110 -- # sync 02:03:10 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:24.886 02:03:10 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 02:03:10 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 02:03:10 -- host/failover.sh@116 -- # nvmftestfini 02:03:10 -- nvmf/common.sh@476 -- # nvmfcleanup 02:03:10 -- nvmf/common.sh@116 -- # sync 02:03:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 02:03:10 -- nvmf/common.sh@119 -- # set +e 02:03:10 -- nvmf/common.sh@120 -- # for i in {1..20} 02:03:10 -- nvmf/common.sh@121
-- # modprobe -v -r nvme-tcp 00:28:24.886 rmmod nvme_tcp 00:28:24.886 rmmod nvme_fabrics 00:28:24.886 rmmod nvme_keyring 00:28:24.886 02:03:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:24.886 02:03:10 -- nvmf/common.sh@123 -- # set -e 00:28:24.886 02:03:10 -- nvmf/common.sh@124 -- # return 0 00:28:24.886 02:03:10 -- nvmf/common.sh@477 -- # '[' -n 2264065 ']' 00:28:24.886 02:03:10 -- nvmf/common.sh@478 -- # killprocess 2264065 00:28:24.886 02:03:10 -- common/autotest_common.sh@926 -- # '[' -z 2264065 ']' 00:28:24.886 02:03:10 -- common/autotest_common.sh@930 -- # kill -0 2264065 00:28:24.886 02:03:10 -- common/autotest_common.sh@931 -- # uname 00:28:24.886 02:03:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:24.886 02:03:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2264065 00:28:24.886 02:03:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:24.886 02:03:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:24.886 02:03:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2264065' 00:28:24.886 killing process with pid 2264065 00:28:24.886 02:03:10 -- common/autotest_common.sh@945 -- # kill 2264065 00:28:24.886 02:03:10 -- common/autotest_common.sh@950 -- # wait 2264065 00:28:25.144 02:03:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:25.144 02:03:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:25.144 02:03:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:25.144 02:03:10 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:25.144 02:03:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:25.144 02:03:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.144 02:03:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:25.144 02:03:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.677 02:03:12 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:27.677 00:28:27.677 real 0m36.547s 00:28:27.677 user 2m6.048s 00:28:27.677 sys 0m6.835s 00:28:27.677 02:03:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:27.677 02:03:12 -- common/autotest_common.sh@10 -- # set +x 00:28:27.677 ************************************ 00:28:27.677 END TEST nvmf_failover 00:28:27.677 ************************************ 00:28:27.677 02:03:12 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:27.677 02:03:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:27.677 02:03:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:27.677 02:03:12 -- common/autotest_common.sh@10 -- # set +x 00:28:27.678 ************************************ 00:28:27.678 START TEST nvmf_discovery 00:28:27.678 ************************************ 00:28:27.678 02:03:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:28:27.678 * Looking for test storage... 
00:28:27.678 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:27.678 02:03:12 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:27.678 02:03:12 -- nvmf/common.sh@7 -- # uname -s 00:28:27.678 02:03:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:27.678 02:03:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:27.678 02:03:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:27.678 02:03:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:27.678 02:03:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:27.678 02:03:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:27.678 02:03:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:27.678 02:03:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:27.678 02:03:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:27.678 02:03:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:27.678 02:03:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:27.678 02:03:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:27.678 02:03:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:27.678 02:03:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:27.678 02:03:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:27.678 02:03:12 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:27.678 02:03:12 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:27.678 02:03:12 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:27.678 02:03:12 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:27.678 02:03:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain dirs repeated, condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.678 02:03:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same toolchain dirs repeated, condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.678 02:03:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain dirs repeated, condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.678 02:03:12 -- paths/export.sh@5 -- # export PATH 00:28:27.678 02:03:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same toolchain dirs repeated, condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:27.678 02:03:12 -- nvmf/common.sh@46 -- # : 0 00:28:27.678 02:03:12 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:27.678 02:03:12 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:27.678 02:03:12 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:27.678 02:03:12 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:27.678 02:03:12 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:27.678 02:03:12 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:27.678 02:03:12 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:27.678 02:03:12 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:27.678 02:03:12 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:28:27.678 02:03:12 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:28:27.678 02:03:12 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:28:27.678 02:03:12 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:28:27.678 02:03:12 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:28:27.678 02:03:12 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:28:27.678 02:03:12 -- host/discovery.sh@25 -- # nvmftestinit 00:28:27.678 02:03:12 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:27.678 02:03:12 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:27.678 02:03:12 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:27.678 02:03:12 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:27.678 02:03:12 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:27.678 02:03:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:27.678 02:03:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:27.678 02:03:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:27.678 02:03:12 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:27.678 02:03:12 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:27.678 02:03:12 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:27.678 02:03:12 -- common/autotest_common.sh@10 -- # set +x
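[annotation] With DISCOVERY_NQN and DISCOVERY_PORT set as above, the target started further below will expose the standard well-known discovery service on 10.0.0.2:8009. As an illustrative aside only — this test queries it through its own SPDK host process on /tmp/host.sock, not through the kernel initiator — the same records could in principle be read with stock nvme-cli:

    # Illustrative, not executed by this test:
    nvme discover -t tcp -a 10.0.0.2 -s 8009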
nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:29.095 02:03:14 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:29.095 02:03:14 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:29.095 02:03:14 -- nvmf/common.sh@294 -- # net_devs=() 00:28:29.095 02:03:14 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:29.095 02:03:14 -- nvmf/common.sh@295 -- # e810=() 00:28:29.354 02:03:14 -- nvmf/common.sh@295 -- # local -ga e810 00:28:29.354 02:03:14 -- nvmf/common.sh@296 -- # x722=() 00:28:29.354 02:03:14 -- nvmf/common.sh@296 -- # local -ga x722 00:28:29.354 02:03:14 -- nvmf/common.sh@297 -- # mlx=() 00:28:29.354 02:03:14 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:29.354 02:03:14 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:29.354 02:03:14 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:29.354 02:03:14 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:29.354 02:03:14 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:29.354 02:03:14 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:29.354 02:03:14 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:29.354 02:03:14 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:29.354 02:03:14 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:29.354 02:03:14 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:29.354 02:03:14 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:29.354 02:03:14 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:29.354 02:03:14 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:29.354 02:03:14 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:29.354 02:03:14 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:29.354 02:03:14 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:29.354 02:03:14 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:29.354 02:03:14 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:29.354 02:03:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:29.354 02:03:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:29.354 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:29.354 02:03:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:29.354 02:03:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:29.354 02:03:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.354 02:03:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.354 02:03:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:29.354 02:03:14 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:29.354 02:03:14 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:29.354 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:29.354 02:03:14 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:29.354 02:03:14 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:29.354 02:03:14 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:29.354 02:03:14 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:29.354 02:03:14 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:29.354 02:03:14 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:29.354 02:03:14 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:29.354 02:03:14 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:29.354 02:03:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:29.354 
02:03:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.354 02:03:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:29.354 02:03:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.354 02:03:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:29.354 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:29.354 02:03:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.354 02:03:14 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:29.354 02:03:14 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:29.354 02:03:14 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:29.354 02:03:14 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:29.354 02:03:14 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:29.354 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:29.354 02:03:14 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:29.354 02:03:14 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:29.354 02:03:14 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:29.354 02:03:14 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:29.354 02:03:14 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:29.354 02:03:14 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:29.354 02:03:14 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:29.354 02:03:14 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:29.354 02:03:14 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:29.354 02:03:14 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:29.354 02:03:14 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:29.354 02:03:14 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:29.355 02:03:14 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:29.355 02:03:14 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:29.355 02:03:14 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:29.355 02:03:14 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:29.355 02:03:14 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:29.355 02:03:14 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:29.355 02:03:14 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:29.355 02:03:14 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:29.355 02:03:14 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:29.355 02:03:14 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:29.355 02:03:14 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:29.355 02:03:14 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:29.355 02:03:14 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:29.355 02:03:14 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:29.355 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:29.355 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:28:29.355 00:28:29.355 --- 10.0.0.2 ping statistics --- 00:28:29.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.355 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:28:29.355 02:03:14 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:29.355 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:29.355 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:28:29.355 00:28:29.355 --- 10.0.0.1 ping statistics --- 00:28:29.355 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:29.355 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:28:29.355 02:03:14 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:29.355 02:03:14 -- nvmf/common.sh@410 -- # return 0 00:28:29.355 02:03:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:29.355 02:03:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:29.355 02:03:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:29.355 02:03:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:29.355 02:03:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:29.355 02:03:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:29.355 02:03:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:29.355 02:03:14 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:28:29.355 02:03:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:29.355 02:03:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:29.355 02:03:14 -- common/autotest_common.sh@10 -- # set +x 00:28:29.355 02:03:14 -- nvmf/common.sh@469 -- # nvmfpid=2270491 00:28:29.355 02:03:14 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:29.355 02:03:14 -- nvmf/common.sh@470 -- # waitforlisten 2270491 00:28:29.355 02:03:14 -- common/autotest_common.sh@819 -- # '[' -z 2270491 ']' 00:28:29.355 02:03:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:29.355 02:03:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:29.355 02:03:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:29.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:29.355 02:03:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:29.355 02:03:14 -- common/autotest_common.sh@10 -- # set +x 00:28:29.355 [2024-04-15 02:03:14.934466] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:28:29.355 [2024-04-15 02:03:14.934537] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:29.355 EAL: No free 2048 kB hugepages reported on node 1 00:28:29.355 [2024-04-15 02:03:15.001060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:29.613 [2024-04-15 02:03:15.092042] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:29.613 [2024-04-15 02:03:15.092215] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:29.613 [2024-04-15 02:03:15.092233] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:29.613 [2024-04-15 02:03:15.092246] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
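nvmf_tcp_init has now split the two ports into a target/initiator pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), port 4420 is opened in iptables, and both directions are ping-verified before nvme-tcp is loaded and the target app is launched inside the namespace. A rough reproduction of that topology, assuming a veth pair in place of the back-to-back E810 ports this CI host has:

    # veth stand-in for cvl_0_0/cvl_0_1 -- an assumption for hosts without
    # two physically looped ports; names and addresses mirror the log.
    ip netns add cvl_0_0_ns_spdk
    ip link add cvl_0_1 type veth peer name cvl_0_0
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1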
00:28:29.613 [2024-04-15 02:03:15.092271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:30.548 02:03:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:30.548 02:03:15 -- common/autotest_common.sh@852 -- # return 0 00:28:30.548 02:03:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:30.548 02:03:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:30.548 02:03:15 -- common/autotest_common.sh@10 -- # set +x 00:28:30.548 02:03:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:30.549 02:03:15 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:30.549 02:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.549 02:03:15 -- common/autotest_common.sh@10 -- # set +x 00:28:30.549 [2024-04-15 02:03:15.906806] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:30.549 02:03:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.549 02:03:15 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:28:30.549 02:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.549 02:03:15 -- common/autotest_common.sh@10 -- # set +x 00:28:30.549 [2024-04-15 02:03:15.914971] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:30.549 02:03:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.549 02:03:15 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:28:30.549 02:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.549 02:03:15 -- common/autotest_common.sh@10 -- # set +x 00:28:30.549 null0 00:28:30.549 02:03:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.549 02:03:15 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:28:30.549 02:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.549 02:03:15 -- common/autotest_common.sh@10 -- # set +x 00:28:30.549 null1 00:28:30.549 02:03:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.549 02:03:15 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:28:30.549 02:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:30.549 02:03:15 -- common/autotest_common.sh@10 -- # set +x 00:28:30.549 02:03:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:30.549 02:03:15 -- host/discovery.sh@45 -- # hostpid=2270643 00:28:30.549 02:03:15 -- host/discovery.sh@46 -- # waitforlisten 2270643 /tmp/host.sock 00:28:30.549 02:03:15 -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:28:30.549 02:03:15 -- common/autotest_common.sh@819 -- # '[' -z 2270643 ']' 00:28:30.549 02:03:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:28:30.549 02:03:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:30.549 02:03:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:30.549 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:30.549 02:03:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:30.549 02:03:15 -- common/autotest_common.sh@10 -- # set +x 00:28:30.549 [2024-04-15 02:03:15.987643] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:28:30.549 [2024-04-15 02:03:15.987721] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2270643 ] 00:28:30.549 EAL: No free 2048 kB hugepages reported on node 1 00:28:30.549 [2024-04-15 02:03:16.049176] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:30.549 [2024-04-15 02:03:16.136381] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:30.549 [2024-04-15 02:03:16.136542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.483 02:03:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:31.483 02:03:16 -- common/autotest_common.sh@852 -- # return 0 00:28:31.483 02:03:16 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:31.483 02:03:16 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:28:31.483 02:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.483 02:03:16 -- common/autotest_common.sh@10 -- # set +x 00:28:31.483 02:03:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.483 02:03:16 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:28:31.483 02:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.484 02:03:16 -- common/autotest_common.sh@10 -- # set +x 00:28:31.484 02:03:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.484 02:03:16 -- host/discovery.sh@72 -- # notify_id=0 00:28:31.484 02:03:16 -- host/discovery.sh@78 -- # get_subsystem_names 00:28:31.484 02:03:16 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:31.484 02:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.484 02:03:16 -- common/autotest_common.sh@10 -- # set +x 00:28:31.484 02:03:16 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:31.484 02:03:16 -- host/discovery.sh@59 -- # sort 00:28:31.484 02:03:16 -- host/discovery.sh@59 -- # xargs 00:28:31.484 02:03:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.484 02:03:16 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:28:31.484 02:03:16 -- host/discovery.sh@79 -- # get_bdev_list 00:28:31.484 02:03:16 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:31.484 02:03:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.484 02:03:16 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:31.484 02:03:16 -- common/autotest_common.sh@10 -- # set +x 00:28:31.484 02:03:16 -- host/discovery.sh@55 -- # sort 00:28:31.484 02:03:16 -- host/discovery.sh@55 -- # xargs 00:28:31.484 02:03:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.484 02:03:17 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:28:31.484 02:03:17 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:28:31.484 02:03:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.484 02:03:17 -- common/autotest_common.sh@10 -- # set +x 00:28:31.484 02:03:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.484 02:03:17 -- host/discovery.sh@82 -- # get_subsystem_names 00:28:31.484 02:03:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:31.484 02:03:17 -- host/discovery.sh@59 -- # jq -r 
'.[].name' 00:28:31.484 02:03:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.484 02:03:17 -- host/discovery.sh@59 -- # sort 00:28:31.484 02:03:17 -- common/autotest_common.sh@10 -- # set +x 00:28:31.484 02:03:17 -- host/discovery.sh@59 -- # xargs 00:28:31.484 02:03:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.484 02:03:17 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:28:31.484 02:03:17 -- host/discovery.sh@83 -- # get_bdev_list 00:28:31.484 02:03:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:31.484 02:03:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.484 02:03:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:31.484 02:03:17 -- common/autotest_common.sh@10 -- # set +x 00:28:31.484 02:03:17 -- host/discovery.sh@55 -- # sort 00:28:31.484 02:03:17 -- host/discovery.sh@55 -- # xargs 00:28:31.484 02:03:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.484 02:03:17 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:28:31.484 02:03:17 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:28:31.484 02:03:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.484 02:03:17 -- common/autotest_common.sh@10 -- # set +x 00:28:31.484 02:03:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.743 02:03:17 -- host/discovery.sh@86 -- # get_subsystem_names 00:28:31.743 02:03:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:31.743 02:03:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.743 02:03:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:31.743 02:03:17 -- common/autotest_common.sh@10 -- # set +x 00:28:31.743 02:03:17 -- host/discovery.sh@59 -- # sort 00:28:31.743 02:03:17 -- host/discovery.sh@59 -- # xargs 00:28:31.743 02:03:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.743 02:03:17 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:28:31.743 02:03:17 -- host/discovery.sh@87 -- # get_bdev_list 00:28:31.743 02:03:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:31.743 02:03:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.743 02:03:17 -- common/autotest_common.sh@10 -- # set +x 00:28:31.743 02:03:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:31.743 02:03:17 -- host/discovery.sh@55 -- # sort 00:28:31.743 02:03:17 -- host/discovery.sh@55 -- # xargs 00:28:31.743 02:03:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.743 02:03:17 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:28:31.743 02:03:17 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:31.743 02:03:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.743 02:03:17 -- common/autotest_common.sh@10 -- # set +x 00:28:31.743 [2024-04-15 02:03:17.214573] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:31.743 02:03:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.743 02:03:17 -- host/discovery.sh@92 -- # get_subsystem_names 00:28:31.743 02:03:17 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:31.743 02:03:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.743 02:03:17 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:31.743 02:03:17 -- common/autotest_common.sh@10 -- # set +x 00:28:31.743 02:03:17 -- host/discovery.sh@59 -- # sort 00:28:31.743 02:03:17 
-- host/discovery.sh@59 -- # xargs 00:28:31.743 02:03:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.743 02:03:17 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:28:31.743 02:03:17 -- host/discovery.sh@93 -- # get_bdev_list 00:28:31.743 02:03:17 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:31.743 02:03:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.743 02:03:17 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:31.743 02:03:17 -- common/autotest_common.sh@10 -- # set +x 00:28:31.743 02:03:17 -- host/discovery.sh@55 -- # sort 00:28:31.743 02:03:17 -- host/discovery.sh@55 -- # xargs 00:28:31.743 02:03:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.743 02:03:17 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:28:31.743 02:03:17 -- host/discovery.sh@94 -- # get_notification_count 00:28:31.743 02:03:17 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:31.743 02:03:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.743 02:03:17 -- host/discovery.sh@74 -- # jq '. | length' 00:28:31.743 02:03:17 -- common/autotest_common.sh@10 -- # set +x 00:28:31.743 02:03:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.743 02:03:17 -- host/discovery.sh@74 -- # notification_count=0 00:28:31.743 02:03:17 -- host/discovery.sh@75 -- # notify_id=0 00:28:31.743 02:03:17 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:28:31.743 02:03:17 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:28:31.743 02:03:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:31.743 02:03:17 -- common/autotest_common.sh@10 -- # set +x 00:28:31.743 02:03:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:31.743 02:03:17 -- host/discovery.sh@100 -- # sleep 1 00:28:32.682 [2024-04-15 02:03:17.998191] bdev_nvme.c:6700:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:32.682 [2024-04-15 02:03:17.998218] bdev_nvme.c:6780:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:32.682 [2024-04-15 02:03:17.998239] bdev_nvme.c:6663:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:32.682 [2024-04-15 02:03:18.124655] bdev_nvme.c:6629:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:32.682 [2024-04-15 02:03:18.186539] bdev_nvme.c:6519:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:32.682 [2024-04-15 02:03:18.186566] bdev_nvme.c:6478:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:32.941 02:03:18 -- host/discovery.sh@101 -- # get_subsystem_names 00:28:32.941 02:03:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:32.941 02:03:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:32.941 02:03:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:32.941 02:03:18 -- common/autotest_common.sh@10 -- # set +x 00:28:32.941 02:03:18 -- host/discovery.sh@59 -- # sort 00:28:32.941 02:03:18 -- host/discovery.sh@59 -- # xargs 00:28:32.941 02:03:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:32.941 02:03:18 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.941 02:03:18 -- host/discovery.sh@102 -- # get_bdev_list 00:28:32.941 02:03:18 -- host/discovery.sh@55 -- # 
rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:32.941 02:03:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:32.941 02:03:18 -- common/autotest_common.sh@10 -- # set +x 00:28:32.941 02:03:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:32.941 02:03:18 -- host/discovery.sh@55 -- # sort 00:28:32.941 02:03:18 -- host/discovery.sh@55 -- # xargs 00:28:32.941 02:03:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:32.941 02:03:18 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:28:32.941 02:03:18 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:28:32.941 02:03:18 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:32.941 02:03:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:32.941 02:03:18 -- common/autotest_common.sh@10 -- # set +x 00:28:32.941 02:03:18 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:32.941 02:03:18 -- host/discovery.sh@63 -- # sort -n 00:28:32.941 02:03:18 -- host/discovery.sh@63 -- # xargs 00:28:32.941 02:03:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:32.941 02:03:18 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:28:32.941 02:03:18 -- host/discovery.sh@104 -- # get_notification_count 00:28:32.941 02:03:18 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:28:32.941 02:03:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:32.941 02:03:18 -- host/discovery.sh@74 -- # jq '. | length' 00:28:32.941 02:03:18 -- common/autotest_common.sh@10 -- # set +x 00:28:32.941 02:03:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:32.941 02:03:18 -- host/discovery.sh@74 -- # notification_count=1 00:28:32.941 02:03:18 -- host/discovery.sh@75 -- # notify_id=1 00:28:32.941 02:03:18 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:28:32.941 02:03:18 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:28:32.941 02:03:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:32.941 02:03:18 -- common/autotest_common.sh@10 -- # set +x 00:28:32.941 02:03:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:32.941 02:03:18 -- host/discovery.sh@109 -- # sleep 1 00:28:33.878 02:03:19 -- host/discovery.sh@110 -- # get_bdev_list 00:28:33.878 02:03:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:33.878 02:03:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:33.878 02:03:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:33.878 02:03:19 -- common/autotest_common.sh@10 -- # set +x 00:28:33.878 02:03:19 -- host/discovery.sh@55 -- # sort 00:28:33.878 02:03:19 -- host/discovery.sh@55 -- # xargs 00:28:34.139 02:03:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:34.139 02:03:19 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:34.139 02:03:19 -- host/discovery.sh@111 -- # get_notification_count 00:28:34.139 02:03:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:28:34.139 02:03:19 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:34.139 02:03:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:34.139 02:03:19 -- common/autotest_common.sh@10 -- # set +x 00:28:34.139 02:03:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:34.139 02:03:19 -- host/discovery.sh@74 -- # notification_count=1 00:28:34.139 02:03:19 -- host/discovery.sh@75 -- # notify_id=2 00:28:34.139 02:03:19 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:28:34.139 02:03:19 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:28:34.139 02:03:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:34.139 02:03:19 -- common/autotest_common.sh@10 -- # set +x 00:28:34.139 [2024-04-15 02:03:19.609714] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:34.139 [2024-04-15 02:03:19.610227] bdev_nvme.c:6682:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:34.139 [2024-04-15 02:03:19.610260] bdev_nvme.c:6663:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:34.139 02:03:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:34.139 02:03:19 -- host/discovery.sh@117 -- # sleep 1 00:28:34.139 [2024-04-15 02:03:19.696532] bdev_nvme.c:6624:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:28:34.399 [2024-04-15 02:03:19.965902] bdev_nvme.c:6519:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:34.399 [2024-04-15 02:03:19.965928] bdev_nvme.c:6478:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:34.399 [2024-04-15 02:03:19.965939] bdev_nvme.c:6478:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:35.340 02:03:20 -- host/discovery.sh@118 -- # get_subsystem_names 00:28:35.340 02:03:20 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:35.340 02:03:20 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:35.340 02:03:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.340 02:03:20 -- common/autotest_common.sh@10 -- # set +x 00:28:35.340 02:03:20 -- host/discovery.sh@59 -- # sort 00:28:35.340 02:03:20 -- host/discovery.sh@59 -- # xargs 00:28:35.340 02:03:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.340 02:03:20 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.340 02:03:20 -- host/discovery.sh@119 -- # get_bdev_list 00:28:35.340 02:03:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:35.340 02:03:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:35.340 02:03:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.340 02:03:20 -- common/autotest_common.sh@10 -- # set +x 00:28:35.340 02:03:20 -- host/discovery.sh@55 -- # sort 00:28:35.340 02:03:20 -- host/discovery.sh@55 -- # xargs 00:28:35.340 02:03:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.340 02:03:20 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:35.340 02:03:20 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:28:35.340 02:03:20 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:35.340 02:03:20 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:35.340 02:03:20 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.340 02:03:20 -- host/discovery.sh@63 -- # sort -n 00:28:35.340 02:03:20 -- common/autotest_common.sh@10 -- # set +x 00:28:35.340 02:03:20 -- host/discovery.sh@63 -- # xargs 00:28:35.340 02:03:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.340 02:03:20 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:28:35.340 02:03:20 -- host/discovery.sh@121 -- # get_notification_count 00:28:35.340 02:03:20 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:35.340 02:03:20 -- host/discovery.sh@74 -- # jq '. | length' 00:28:35.340 02:03:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.341 02:03:20 -- common/autotest_common.sh@10 -- # set +x 00:28:35.341 02:03:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.341 02:03:20 -- host/discovery.sh@74 -- # notification_count=0 00:28:35.341 02:03:20 -- host/discovery.sh@75 -- # notify_id=2 00:28:35.341 02:03:20 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:28:35.341 02:03:20 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:35.341 02:03:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:35.341 02:03:20 -- common/autotest_common.sh@10 -- # set +x 00:28:35.341 [2024-04-15 02:03:20.789582] bdev_nvme.c:6682:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:28:35.341 [2024-04-15 02:03:20.789625] bdev_nvme.c:6663:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:35.341 02:03:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:35.341 02:03:20 -- host/discovery.sh@127 -- # sleep 1 00:28:35.341 [2024-04-15 02:03:20.797364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.341 [2024-04-15 02:03:20.797412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.341 [2024-04-15 02:03:20.797430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.341 [2024-04-15 02:03:20.797445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.341 [2024-04-15 02:03:20.797474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.341 [2024-04-15 02:03:20.797489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.341 [2024-04-15 02:03:20.797503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:35.341 [2024-04-15 02:03:20.797519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:35.341 [2024-04-15 02:03:20.797533] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2ba0 is same with the state(5) to be set 00:28:35.341 [2024-04-15 02:03:20.807361] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2ba0 (9): Bad file descriptor 00:28:35.341 [2024-04-15 02:03:20.817413] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:35.341 [2024-04-15 02:03:20.817824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.341 [2024-04-15 02:03:20.818073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.341 [2024-04-15 02:03:20.818105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2ba0 with addr=10.0.0.2, port=4420 00:28:35.341 [2024-04-15 02:03:20.818122] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2ba0 is same with the state(5) to be set 00:28:35.341 [2024-04-15 02:03:20.818147] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2ba0 (9): Bad file descriptor 00:28:35.341 [2024-04-15 02:03:20.818200] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:35.341 [2024-04-15 02:03:20.818220] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:35.341 [2024-04-15 02:03:20.818237] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:35.341 [2024-04-15 02:03:20.818258] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:35.341 [2024-04-15 02:03:20.827513] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:35.341 [2024-04-15 02:03:20.827865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.341 [2024-04-15 02:03:20.828113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.341 [2024-04-15 02:03:20.828141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2ba0 with addr=10.0.0.2, port=4420 00:28:35.341 [2024-04-15 02:03:20.828163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2ba0 is same with the state(5) to be set 00:28:35.341 [2024-04-15 02:03:20.828187] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2ba0 (9): Bad file descriptor 00:28:35.341 [2024-04-15 02:03:20.828220] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:35.341 [2024-04-15 02:03:20.828239] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:35.341 [2024-04-15 02:03:20.828253] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:35.341 [2024-04-15 02:03:20.828283] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
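Each error block in this stretch is one pass of the bdev_nvme reset path: the 4420 listener was removed just above, so every reconnect attempt gets connect() errno 111 (ECONNREFUSED), controller reinitialization fails, and the reset is retried until the discovery poller withdraws the dead path. A hypothetical spot check (not part of the test; needs a netcat that supports -z, e.g. the OpenBSD variant) would show the same refusal by hand:

    # With the 4420 listener gone, a raw TCP connect is refused too.
    nc -z -w 1 10.0.0.2 4420 || echo 'refused, matching the errno 111 traces'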
00:28:35.341 [2024-04-15 02:03:20.837589] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:35.341 [2024-04-15 02:03:20.837911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.341 [2024-04-15 02:03:20.838166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.341 [2024-04-15 02:03:20.838195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2ba0 with addr=10.0.0.2, port=4420 00:28:35.341 [2024-04-15 02:03:20.838212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2ba0 is same with the state(5) to be set 00:28:35.341 [2024-04-15 02:03:20.838235] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2ba0 (9): Bad file descriptor 00:28:35.341 [2024-04-15 02:03:20.838278] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:35.341 [2024-04-15 02:03:20.838297] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:35.341 [2024-04-15 02:03:20.838312] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:35.341 [2024-04-15 02:03:20.838343] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:35.341 [2024-04-15 02:03:20.847662] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:35.341 [2024-04-15 02:03:20.847922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.341 [2024-04-15 02:03:20.848149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.341 [2024-04-15 02:03:20.848177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2ba0 with addr=10.0.0.2, port=4420 00:28:35.341 [2024-04-15 02:03:20.848194] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2ba0 is same with the state(5) to be set 00:28:35.341 [2024-04-15 02:03:20.848217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2ba0 (9): Bad file descriptor 00:28:35.341 [2024-04-15 02:03:20.848237] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:35.341 [2024-04-15 02:03:20.848252] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:35.341 [2024-04-15 02:03:20.848266] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:35.341 [2024-04-15 02:03:20.848285] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:35.341 [2024-04-15 02:03:20.857730] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:35.341 [2024-04-15 02:03:20.858063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.341 [2024-04-15 02:03:20.858313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.341 [2024-04-15 02:03:20.858341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2ba0 with addr=10.0.0.2, port=4420 00:28:35.341 [2024-04-15 02:03:20.858374] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2ba0 is same with the state(5) to be set 00:28:35.341 [2024-04-15 02:03:20.858405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2ba0 (9): Bad file descriptor 00:28:35.341 [2024-04-15 02:03:20.858469] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:35.341 [2024-04-15 02:03:20.858488] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:35.341 [2024-04-15 02:03:20.858501] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:35.341 [2024-04-15 02:03:20.858534] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:35.341 [2024-04-15 02:03:20.867798] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:35.341 [2024-04-15 02:03:20.868108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.341 [2024-04-15 02:03:20.868340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:35.341 [2024-04-15 02:03:20.868367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c2ba0 with addr=10.0.0.2, port=4420 00:28:35.341 [2024-04-15 02:03:20.868383] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12c2ba0 is same with the state(5) to be set 00:28:35.341 [2024-04-15 02:03:20.868405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12c2ba0 (9): Bad file descriptor 00:28:35.341 [2024-04-15 02:03:20.868426] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:28:35.341 [2024-04-15 02:03:20.868441] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:28:35.341 [2024-04-15 02:03:20.868455] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:28:35.341 [2024-04-15 02:03:20.868502] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
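The retries end once the poller processes the refreshed discovery log page: 4420 is reported not found and its path dropped while 4421 is kept (next lines), and the test then asserts that only the 4421 path survives. Assuming rpc_cmd is the usual wrapper over scripts/rpc.py, the get_subsystem_paths check it runs is equivalent to:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs   # expect: 4421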
00:28:35.341 [2024-04-15 02:03:20.876563] bdev_nvme.c:6487:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:28:35.341 [2024-04-15 02:03:20.876607] bdev_nvme.c:6478:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:36.282 02:03:21 -- host/discovery.sh@128 -- # get_subsystem_names 00:28:36.282 02:03:21 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:36.282 02:03:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:36.282 02:03:21 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:36.282 02:03:21 -- common/autotest_common.sh@10 -- # set +x 00:28:36.282 02:03:21 -- host/discovery.sh@59 -- # sort 00:28:36.282 02:03:21 -- host/discovery.sh@59 -- # xargs 00:28:36.282 02:03:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:36.282 02:03:21 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.282 02:03:21 -- host/discovery.sh@129 -- # get_bdev_list 00:28:36.282 02:03:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:36.282 02:03:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:36.282 02:03:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:36.282 02:03:21 -- common/autotest_common.sh@10 -- # set +x 00:28:36.282 02:03:21 -- host/discovery.sh@55 -- # sort 00:28:36.282 02:03:21 -- host/discovery.sh@55 -- # xargs 00:28:36.282 02:03:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:36.282 02:03:21 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:36.282 02:03:21 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:28:36.282 02:03:21 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:28:36.282 02:03:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:36.282 02:03:21 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:28:36.282 02:03:21 -- common/autotest_common.sh@10 -- # set +x 00:28:36.282 02:03:21 -- host/discovery.sh@63 -- # sort -n 00:28:36.282 02:03:21 -- host/discovery.sh@63 -- # xargs 00:28:36.282 02:03:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:36.282 02:03:21 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:28:36.282 02:03:21 -- host/discovery.sh@131 -- # get_notification_count 00:28:36.282 02:03:21 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:36.282 02:03:21 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:36.282 02:03:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:36.282 02:03:21 -- common/autotest_common.sh@10 -- # set +x 00:28:36.540 02:03:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:36.540 02:03:21 -- host/discovery.sh@74 -- # notification_count=0 00:28:36.540 02:03:21 -- host/discovery.sh@75 -- # notify_id=2 00:28:36.540 02:03:21 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:28:36.540 02:03:21 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:28:36.540 02:03:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:36.540 02:03:21 -- common/autotest_common.sh@10 -- # set +x 00:28:36.540 02:03:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:36.540 02:03:21 -- host/discovery.sh@135 -- # sleep 1 00:28:37.473 02:03:22 -- host/discovery.sh@136 -- # get_subsystem_names 00:28:37.473 02:03:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:28:37.473 02:03:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:28:37.473 02:03:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.473 02:03:22 -- common/autotest_common.sh@10 -- # set +x 00:28:37.473 02:03:22 -- host/discovery.sh@59 -- # sort 00:28:37.473 02:03:22 -- host/discovery.sh@59 -- # xargs 00:28:37.473 02:03:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.473 02:03:23 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:28:37.473 02:03:23 -- host/discovery.sh@137 -- # get_bdev_list 00:28:37.473 02:03:23 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:37.473 02:03:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.473 02:03:23 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:37.473 02:03:23 -- common/autotest_common.sh@10 -- # set +x 00:28:37.473 02:03:23 -- host/discovery.sh@55 -- # sort 00:28:37.473 02:03:23 -- host/discovery.sh@55 -- # xargs 00:28:37.473 02:03:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.473 02:03:23 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:28:37.473 02:03:23 -- host/discovery.sh@138 -- # get_notification_count 00:28:37.473 02:03:23 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:28:37.473 02:03:23 -- host/discovery.sh@74 -- # jq '. 
| length' 00:28:37.473 02:03:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.473 02:03:23 -- common/autotest_common.sh@10 -- # set +x 00:28:37.473 02:03:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:37.473 02:03:23 -- host/discovery.sh@74 -- # notification_count=2 00:28:37.473 02:03:23 -- host/discovery.sh@75 -- # notify_id=4 00:28:37.473 02:03:23 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:28:37.473 02:03:23 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:37.473 02:03:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:37.473 02:03:23 -- common/autotest_common.sh@10 -- # set +x 00:28:38.851 [2024-04-15 02:03:24.174435] bdev_nvme.c:6700:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:38.851 [2024-04-15 02:03:24.174465] bdev_nvme.c:6780:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:38.851 [2024-04-15 02:03:24.174490] bdev_nvme.c:6663:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:38.851 [2024-04-15 02:03:24.260739] bdev_nvme.c:6629:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:28:38.851 [2024-04-15 02:03:24.366120] bdev_nvme.c:6519:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:38.851 [2024-04-15 02:03:24.366152] bdev_nvme.c:6478:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:28:38.851 02:03:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:38.851 02:03:24 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:38.851 02:03:24 -- common/autotest_common.sh@640 -- # local es=0 00:28:38.851 02:03:24 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:38.851 02:03:24 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:38.851 02:03:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:38.851 02:03:24 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:38.851 02:03:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:38.851 02:03:24 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:38.851 02:03:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:38.851 02:03:24 -- common/autotest_common.sh@10 -- # set +x 00:28:38.851 request: 00:28:38.851 { 00:28:38.851 "name": "nvme", 00:28:38.851 "trtype": "tcp", 00:28:38.851 "traddr": "10.0.0.2", 00:28:38.851 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:38.851 "adrfam": "ipv4", 00:28:38.851 "trsvcid": "8009", 00:28:38.851 "wait_for_attach": true, 00:28:38.851 "method": "bdev_nvme_start_discovery", 00:28:38.851 "req_id": 1 00:28:38.851 } 00:28:38.851 Got JSON-RPC error response 00:28:38.851 response: 00:28:38.851 { 00:28:38.851 "code": -17, 00:28:38.851 "message": "File exists" 00:28:38.851 } 00:28:38.851 02:03:24 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:38.851 02:03:24 -- common/autotest_common.sh@643 -- # es=1 00:28:38.851 02:03:24 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:38.851 02:03:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:38.851 02:03:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:38.851 02:03:24 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:28:38.851 02:03:24 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:38.852 02:03:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:38.852 02:03:24 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:38.852 02:03:24 -- common/autotest_common.sh@10 -- # set +x 00:28:38.852 02:03:24 -- host/discovery.sh@67 -- # sort 00:28:38.852 02:03:24 -- host/discovery.sh@67 -- # xargs 00:28:38.852 02:03:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:38.852 02:03:24 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:28:38.852 02:03:24 -- host/discovery.sh@147 -- # get_bdev_list 00:28:38.852 02:03:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:38.852 02:03:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:38.852 02:03:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:38.852 02:03:24 -- common/autotest_common.sh@10 -- # set +x 00:28:38.852 02:03:24 -- host/discovery.sh@55 -- # sort 00:28:38.852 02:03:24 -- host/discovery.sh@55 -- # xargs 00:28:38.852 02:03:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:38.852 02:03:24 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:38.852 02:03:24 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:38.852 02:03:24 -- common/autotest_common.sh@640 -- # local es=0 00:28:38.852 02:03:24 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:38.852 02:03:24 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:38.852 02:03:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:38.852 02:03:24 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:38.852 02:03:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:38.852 02:03:24 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:28:38.852 02:03:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:38.852 02:03:24 -- common/autotest_common.sh@10 -- # set +x 00:28:38.852 request: 00:28:38.852 { 00:28:38.852 "name": "nvme_second", 00:28:38.852 "trtype": "tcp", 00:28:38.852 "traddr": "10.0.0.2", 00:28:38.852 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:38.852 "adrfam": "ipv4", 00:28:38.852 "trsvcid": "8009", 00:28:38.852 "wait_for_attach": true, 00:28:38.852 "method": "bdev_nvme_start_discovery", 00:28:38.852 "req_id": 1 00:28:38.852 } 00:28:38.852 Got JSON-RPC error response 00:28:38.852 response: 00:28:38.852 { 00:28:38.852 "code": -17, 00:28:38.852 "message": "File exists" 00:28:38.852 } 00:28:38.852 02:03:24 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:38.852 02:03:24 -- common/autotest_common.sh@643 -- # es=1 00:28:38.852 02:03:24 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:38.852 02:03:24 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:38.852 02:03:24 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:38.852 
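Both duplicate bdev_nvme_start_discovery attempts above are rejected with JSON-RPC error -17 ("File exists"): the first reuses the name nvme, the second uses a fresh name but still points at the 10.0.0.2:8009 service already being polled. The next step instead aims nvme_second at port 8010, where nothing listens, with a 3-second attach timeout; assuming the scripts/rpc.py wrapper, that call is:

    scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -T 3000   # -T maps to attach_timeout_ms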
02:03:24 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:28:38.852 02:03:24 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:38.852 02:03:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:38.852 02:03:24 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:38.852 02:03:24 -- common/autotest_common.sh@10 -- # set +x 00:28:38.852 02:03:24 -- host/discovery.sh@67 -- # sort 00:28:38.852 02:03:24 -- host/discovery.sh@67 -- # xargs 00:28:38.852 02:03:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.111 02:03:24 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:28:39.111 02:03:24 -- host/discovery.sh@153 -- # get_bdev_list 00:28:39.111 02:03:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:39.111 02:03:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.111 02:03:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:28:39.111 02:03:24 -- common/autotest_common.sh@10 -- # set +x 00:28:39.111 02:03:24 -- host/discovery.sh@55 -- # sort 00:28:39.111 02:03:24 -- host/discovery.sh@55 -- # xargs 00:28:39.111 02:03:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:39.111 02:03:24 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:28:39.111 02:03:24 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:39.111 02:03:24 -- common/autotest_common.sh@640 -- # local es=0 00:28:39.111 02:03:24 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:39.111 02:03:24 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:28:39.111 02:03:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:39.111 02:03:24 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:28:39.111 02:03:24 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:28:39.111 02:03:24 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:28:39.111 02:03:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:39.111 02:03:24 -- common/autotest_common.sh@10 -- # set +x 00:28:40.048 [2024-04-15 02:03:25.561611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.048 [2024-04-15 02:03:25.561938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.048 [2024-04-15 02:03:25.561971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12bed00 with addr=10.0.0.2, port=8010 00:28:40.048 [2024-04-15 02:03:25.562002] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:40.048 [2024-04-15 02:03:25.562018] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:40.048 [2024-04-15 02:03:25.562034] bdev_nvme.c:6762:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:40.983 [2024-04-15 02:03:26.563989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.983 [2024-04-15 02:03:26.564285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:40.983 [2024-04-15 02:03:26.564314] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0x12c39a0 with addr=10.0.0.2, port=8010 00:28:40.983 [2024-04-15 02:03:26.564337] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:40.983 [2024-04-15 02:03:26.564351] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:40.983 [2024-04-15 02:03:26.564364] bdev_nvme.c:6762:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:28:41.922 [2024-04-15 02:03:27.566163] bdev_nvme.c:6743:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:28:41.922 request: 00:28:41.922 { 00:28:41.922 "name": "nvme_second", 00:28:41.922 "trtype": "tcp", 00:28:41.922 "traddr": "10.0.0.2", 00:28:41.922 "hostnqn": "nqn.2021-12.io.spdk:test", 00:28:41.922 "adrfam": "ipv4", 00:28:41.922 "trsvcid": "8010", 00:28:41.922 "attach_timeout_ms": 3000, 00:28:41.922 "method": "bdev_nvme_start_discovery", 00:28:41.922 "req_id": 1 00:28:41.922 } 00:28:41.922 Got JSON-RPC error response 00:28:41.922 response: 00:28:41.922 { 00:28:41.922 "code": -110, 00:28:41.922 "message": "Connection timed out" 00:28:41.922 } 00:28:41.922 02:03:27 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:28:42.181 02:03:27 -- common/autotest_common.sh@643 -- # es=1 00:28:42.181 02:03:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:28:42.181 02:03:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:28:42.181 02:03:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:28:42.181 02:03:27 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:28:42.181 02:03:27 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:28:42.181 02:03:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:42.181 02:03:27 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:28:42.181 02:03:27 -- common/autotest_common.sh@10 -- # set +x 00:28:42.181 02:03:27 -- host/discovery.sh@67 -- # sort 00:28:42.181 02:03:27 -- host/discovery.sh@67 -- # xargs 00:28:42.181 02:03:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:42.181 02:03:27 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:28:42.181 02:03:27 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:28:42.181 02:03:27 -- host/discovery.sh@162 -- # kill 2270643 00:28:42.181 02:03:27 -- host/discovery.sh@163 -- # nvmftestfini 00:28:42.181 02:03:27 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:42.181 02:03:27 -- nvmf/common.sh@116 -- # sync 00:28:42.181 02:03:27 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:28:42.181 02:03:27 -- nvmf/common.sh@119 -- # set +e 00:28:42.181 02:03:27 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:42.181 02:03:27 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:28:42.181 rmmod nvme_tcp 00:28:42.181 rmmod nvme_fabrics 00:28:42.181 rmmod nvme_keyring 00:28:42.181 02:03:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:42.181 02:03:27 -- nvmf/common.sh@123 -- # set -e 00:28:42.181 02:03:27 -- nvmf/common.sh@124 -- # return 0 00:28:42.181 02:03:27 -- nvmf/common.sh@477 -- # '[' -n 2270491 ']' 00:28:42.181 02:03:27 -- nvmf/common.sh@478 -- # killprocess 2270491 00:28:42.181 02:03:27 -- common/autotest_common.sh@926 -- # '[' -z 2270491 ']' 00:28:42.181 02:03:27 -- common/autotest_common.sh@930 -- # kill -0 2270491 00:28:42.181 02:03:27 -- common/autotest_common.sh@931 -- # uname 00:28:42.181 02:03:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:28:42.181 02:03:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2270491 
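Teardown is in progress here: the host app (pid 2270643) has been killed, nvmfcleanup has unloaded the initiator modules (the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines above), and killprocess is verifying pid 2270491 before killing the target. A condensed equivalent of the sequence, with remove_spdk_ns's namespace deletion stated as an assumption since its body is not traced here:

    kill 2270643                        # host app on /tmp/host.sock
    modprobe -r nvme-tcp nvme-fabrics   # initiator stack loaded for the test
    kill 2270491                        # nvmf_tgt inside the namespace
    ip netns delete cvl_0_0_ns_spdk     # assumed content of remove_spdk_ns
    ip -4 addr flush cvl_0_1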
00:28:42.181 02:03:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:28:42.181 02:03:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:28:42.181 02:03:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2270491' 00:28:42.181 killing process with pid 2270491 00:28:42.181 02:03:27 -- common/autotest_common.sh@945 -- # kill 2270491 00:28:42.181 02:03:27 -- common/autotest_common.sh@950 -- # wait 2270491 00:28:42.440 02:03:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:42.440 02:03:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:28:42.440 02:03:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:28:42.440 02:03:27 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:42.440 02:03:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:28:42.440 02:03:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.440 02:03:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:42.440 02:03:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.349 02:03:29 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:28:44.609 00:28:44.609 real 0m17.240s 00:28:44.609 user 0m26.761s 00:28:44.609 sys 0m2.873s 00:28:44.609 02:03:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:44.609 02:03:29 -- common/autotest_common.sh@10 -- # set +x 00:28:44.609 ************************************ 00:28:44.609 END TEST nvmf_discovery 00:28:44.609 ************************************ 00:28:44.609 02:03:30 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:44.609 02:03:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:28:44.609 02:03:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:28:44.609 02:03:30 -- common/autotest_common.sh@10 -- # set +x 00:28:44.609 ************************************ 00:28:44.609 START TEST nvmf_discovery_remove_ifc 00:28:44.609 ************************************ 00:28:44.609 02:03:30 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:28:44.609 * Looking for test storage... 
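With the timing summary and END TEST banner emitted, run_test immediately launches the next host-level suite, discovery_remove_ifc.sh. Its first traced action (below) is re-sourcing nvmf/common.sh, which resets the defaults and regenerates the host NQN, roughly:

    NVMF_PORT=4420 NVMF_SECOND_PORT=4421 NVMF_THIRD_PORT=4422
    NVMF_IP_PREFIX=192.168.100
    NVME_HOSTNQN=$(nvme gen-hostnqn)    # yields nqn.2014-08.org.nvmexpress:uuid:...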
00:28:44.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:44.609 02:03:30 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:44.609 02:03:30 -- nvmf/common.sh@7 -- # uname -s 00:28:44.609 02:03:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.609 02:03:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.609 02:03:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.609 02:03:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.609 02:03:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.609 02:03:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.609 02:03:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.609 02:03:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.609 02:03:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.609 02:03:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.609 02:03:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:44.609 02:03:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:44.609 02:03:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.609 02:03:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.609 02:03:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:44.609 02:03:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.609 02:03:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.609 02:03:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.609 02:03:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.609 02:03:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.609 02:03:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.609 02:03:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.609 02:03:30 -- paths/export.sh@5 -- # export PATH 00:28:44.609 02:03:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.609 02:03:30 -- nvmf/common.sh@46 -- # : 0 00:28:44.609 02:03:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:44.609 02:03:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:44.609 02:03:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:44.609 02:03:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.609 02:03:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.609 02:03:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:44.609 02:03:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:44.609 02:03:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:44.609 02:03:30 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:28:44.609 02:03:30 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:28:44.609 02:03:30 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:28:44.609 02:03:30 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:28:44.609 02:03:30 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:28:44.609 02:03:30 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:28:44.609 02:03:30 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:28:44.609 02:03:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:28:44.609 02:03:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.609 02:03:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:44.609 02:03:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:44.609 02:03:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:44.609 02:03:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.609 02:03:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:44.609 02:03:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.609 02:03:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:44.609 02:03:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:44.609 02:03:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:44.609 02:03:30 -- common/autotest_common.sh@10 -- # set +x 00:28:46.542 02:03:32 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:46.542 02:03:32 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:46.542 02:03:32 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:46.542 02:03:32 
-- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:46.542 02:03:32 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:46.542 02:03:32 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:46.542 02:03:32 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:46.542 02:03:32 -- nvmf/common.sh@294 -- # net_devs=() 00:28:46.542 02:03:32 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:46.542 02:03:32 -- nvmf/common.sh@295 -- # e810=() 00:28:46.542 02:03:32 -- nvmf/common.sh@295 -- # local -ga e810 00:28:46.542 02:03:32 -- nvmf/common.sh@296 -- # x722=() 00:28:46.542 02:03:32 -- nvmf/common.sh@296 -- # local -ga x722 00:28:46.542 02:03:32 -- nvmf/common.sh@297 -- # mlx=() 00:28:46.542 02:03:32 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:46.542 02:03:32 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:46.542 02:03:32 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:46.542 02:03:32 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:46.542 02:03:32 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:46.542 02:03:32 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:46.542 02:03:32 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:46.542 02:03:32 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:46.542 02:03:32 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:46.542 02:03:32 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:46.542 02:03:32 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:46.542 02:03:32 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:46.542 02:03:32 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:46.542 02:03:32 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:28:46.542 02:03:32 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:28:46.542 02:03:32 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:28:46.542 02:03:32 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:28:46.542 02:03:32 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:46.542 02:03:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:46.542 02:03:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:46.542 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:46.542 02:03:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:46.542 02:03:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:46.542 02:03:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.542 02:03:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.542 02:03:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:46.542 02:03:32 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:46.542 02:03:32 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:46.542 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:46.542 02:03:32 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:28:46.542 02:03:32 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:28:46.542 02:03:32 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:46.542 02:03:32 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:46.542 02:03:32 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:28:46.542 02:03:32 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:46.542 02:03:32 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:28:46.542 02:03:32 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:28:46.542 02:03:32 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:46.542 02:03:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.542 02:03:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:46.542 02:03:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.542 02:03:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:46.542 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:46.542 02:03:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.542 02:03:32 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:46.542 02:03:32 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:46.542 02:03:32 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:46.543 02:03:32 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:46.543 02:03:32 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:46.543 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:46.543 02:03:32 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:46.543 02:03:32 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:46.543 02:03:32 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:46.543 02:03:32 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:46.543 02:03:32 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:28:46.543 02:03:32 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:28:46.543 02:03:32 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:46.543 02:03:32 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:46.543 02:03:32 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:46.543 02:03:32 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:28:46.543 02:03:32 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:46.543 02:03:32 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:46.543 02:03:32 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:28:46.543 02:03:32 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:46.543 02:03:32 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:46.543 02:03:32 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:28:46.543 02:03:32 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:28:46.543 02:03:32 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:28:46.543 02:03:32 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:46.543 02:03:32 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:46.543 02:03:32 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:46.543 02:03:32 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:28:46.543 02:03:32 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:46.543 02:03:32 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:46.543 02:03:32 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:46.543 02:03:32 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:28:46.543 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:46.543 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:28:46.543 00:28:46.543 --- 10.0.0.2 ping statistics --- 00:28:46.543 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.543 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:28:46.543 02:03:32 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:46.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:46.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:28:46.806 00:28:46.806 --- 10.0.0.1 ping statistics --- 00:28:46.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:46.806 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:28:46.806 02:03:32 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:46.806 02:03:32 -- nvmf/common.sh@410 -- # return 0 00:28:46.806 02:03:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:46.806 02:03:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:46.806 02:03:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:28:46.806 02:03:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:28:46.806 02:03:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:46.806 02:03:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:28:46.806 02:03:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:28:46.806 02:03:32 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:28:46.806 02:03:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:46.806 02:03:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:28:46.806 02:03:32 -- common/autotest_common.sh@10 -- # set +x 00:28:46.806 02:03:32 -- nvmf/common.sh@469 -- # nvmfpid=2274237 00:28:46.806 02:03:32 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:28:46.806 02:03:32 -- nvmf/common.sh@470 -- # waitforlisten 2274237 00:28:46.806 02:03:32 -- common/autotest_common.sh@819 -- # '[' -z 2274237 ']' 00:28:46.806 02:03:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:46.806 02:03:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:46.806 02:03:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:46.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:46.806 02:03:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:46.806 02:03:32 -- common/autotest_common.sh@10 -- # set +x 00:28:46.806 [2024-04-15 02:03:32.266803] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:28:46.807 [2024-04-15 02:03:32.266901] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:46.807 EAL: No free 2048 kB hugepages reported on node 1 00:28:46.807 [2024-04-15 02:03:32.335904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.807 [2024-04-15 02:03:32.422755] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:46.807 [2024-04-15 02:03:32.422931] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
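The plumbing traced above is worth reading as one unit: nvmf_tcp_init splits the two ice ports into a back-to-back pair, moving cvl_0_0 into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) and leaving cvl_0_1 in the root namespace as the initiator (10.0.0.1), then ping-verifies both directions before loading nvme-tcp. Condensed from the commands logged above; the cvl_0_* device names are specific to this machine:

  # Condensed from the nvmf_tcp_init trace above (device names host-specific).
  ip netns add cvl_0_0_ns_spdk                        # target gets its own netns
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, root netns
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP in
  ping -c 1 10.0.0.2                                  # root netns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target netns -> initiator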
00:28:46.807 [2024-04-15 02:03:32.422951] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:46.807 [2024-04-15 02:03:32.422966] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:46.807 [2024-04-15 02:03:32.422997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.743 02:03:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:47.743 02:03:33 -- common/autotest_common.sh@852 -- # return 0 00:28:47.743 02:03:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:47.743 02:03:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:28:47.743 02:03:33 -- common/autotest_common.sh@10 -- # set +x 00:28:47.743 02:03:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:47.743 02:03:33 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:28:47.743 02:03:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:47.743 02:03:33 -- common/autotest_common.sh@10 -- # set +x 00:28:47.743 [2024-04-15 02:03:33.237899] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:47.743 [2024-04-15 02:03:33.246075] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:28:47.743 null0 00:28:47.743 [2024-04-15 02:03:33.278014] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:47.743 02:03:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:47.743 02:03:33 -- host/discovery_remove_ifc.sh@59 -- # hostpid=2274392 00:28:47.743 02:03:33 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:28:47.744 02:03:33 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2274392 /tmp/host.sock 00:28:47.744 02:03:33 -- common/autotest_common.sh@819 -- # '[' -z 2274392 ']' 00:28:47.744 02:03:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:28:47.744 02:03:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:28:47.744 02:03:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:28:47.744 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:28:47.744 02:03:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:28:47.744 02:03:33 -- common/autotest_common.sh@10 -- # set +x 00:28:47.744 [2024-04-15 02:03:33.337622] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
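Two SPDK applications are now in flight, and the rest of this test talks to them over separate RPC sockets: the target (pid 2274237) runs inside the namespace on the default /var/tmp/spdk.sock, while the host-side app (pid 2274392) runs in the root namespace on /tmp/host.sock with bdev_nvme debug logging and is held at --wait-for-rpc until configured. A sketch of how the two endpoints are addressed, assuming the stock rpc.py client in place of the harness's rpc_cmd wrapper:

  # Sketch, assuming plain scripts/rpc.py instead of the rpc_cmd test wrapper.
  # Target app: inside the namespace, default RPC socket.
  ip netns exec cvl_0_0_ns_spdk scripts/rpc.py rpc_get_methods
  # Host app: root namespace, explicit socket; -s precedes the method name.
  scripts/rpc.py -s /tmp/host.sock rpc_get_methods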
00:28:47.744 [2024-04-15 02:03:33.337699] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2274392 ] 00:28:47.744 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.004 [2024-04-15 02:03:33.397888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.004 [2024-04-15 02:03:33.481423] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:48.004 [2024-04-15 02:03:33.481581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.004 02:03:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:28:48.004 02:03:33 -- common/autotest_common.sh@852 -- # return 0 00:28:48.004 02:03:33 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:48.004 02:03:33 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:28:48.004 02:03:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:48.004 02:03:33 -- common/autotest_common.sh@10 -- # set +x 00:28:48.004 02:03:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:48.004 02:03:33 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:28:48.004 02:03:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:48.004 02:03:33 -- common/autotest_common.sh@10 -- # set +x 00:28:48.004 02:03:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:48.004 02:03:33 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:28:48.004 02:03:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:48.004 02:03:33 -- common/autotest_common.sh@10 -- # set +x 00:28:49.386 [2024-04-15 02:03:34.657178] bdev_nvme.c:6700:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:49.386 [2024-04-15 02:03:34.657224] bdev_nvme.c:6780:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:49.386 [2024-04-15 02:03:34.657245] bdev_nvme.c:6663:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:49.386 [2024-04-15 02:03:34.743522] bdev_nvme.c:6629:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:28:49.386 [2024-04-15 02:03:34.929775] bdev_nvme.c:7489:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:49.386 [2024-04-15 02:03:34.929835] bdev_nvme.c:7489:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:49.386 [2024-04-15 02:03:34.929877] bdev_nvme.c:7489:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:49.386 [2024-04-15 02:03:34.929906] bdev_nvme.c:6519:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:28:49.386 [2024-04-15 02:03:34.929944] bdev_nvme.c:6478:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:49.386 02:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:49.386 02:03:34 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:28:49.386 02:03:34 -- host/discovery_remove_ifc.sh@33 -- # 
get_bdev_list 00:28:49.386 02:03:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:49.386 02:03:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:49.386 02:03:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:49.386 02:03:34 -- common/autotest_common.sh@10 -- # set +x 00:28:49.386 02:03:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:49.386 02:03:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:49.386 [2024-04-15 02:03:34.934982] bdev_nvme.c:1581:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1a98420 was disconnected and freed. delete nvme_qpair. 00:28:49.386 02:03:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:49.386 02:03:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:28:49.386 02:03:34 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:28:49.386 02:03:34 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:28:49.386 02:03:35 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:28:49.386 02:03:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:49.387 02:03:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:49.387 02:03:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:49.387 02:03:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:49.387 02:03:35 -- common/autotest_common.sh@10 -- # set +x 00:28:49.387 02:03:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:49.387 02:03:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:49.645 02:03:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:49.645 02:03:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:49.645 02:03:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:50.582 02:03:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:50.582 02:03:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:50.582 02:03:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:50.582 02:03:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:50.582 02:03:36 -- common/autotest_common.sh@10 -- # set +x 00:28:50.582 02:03:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:50.582 02:03:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:50.582 02:03:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:50.582 02:03:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:50.582 02:03:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:51.520 02:03:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:51.520 02:03:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:51.520 02:03:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:51.520 02:03:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:51.520 02:03:37 -- common/autotest_common.sh@10 -- # set +x 00:28:51.520 02:03:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:51.520 02:03:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:51.520 02:03:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:51.520 02:03:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:51.520 02:03:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:52.899 02:03:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:52.899 02:03:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s 
/tmp/host.sock bdev_get_bdevs 00:28:52.899 02:03:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:52.899 02:03:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:52.899 02:03:38 -- common/autotest_common.sh@10 -- # set +x 00:28:52.899 02:03:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:52.899 02:03:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:52.899 02:03:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:52.899 02:03:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:52.899 02:03:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:53.834 02:03:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:53.834 02:03:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:53.834 02:03:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:53.834 02:03:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:53.834 02:03:39 -- common/autotest_common.sh@10 -- # set +x 00:28:53.834 02:03:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:53.834 02:03:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:53.834 02:03:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:53.834 02:03:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:53.834 02:03:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:54.774 02:03:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:54.774 02:03:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:54.774 02:03:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:54.774 02:03:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:54.774 02:03:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:54.774 02:03:40 -- common/autotest_common.sh@10 -- # set +x 00:28:54.774 02:03:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:54.774 02:03:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:54.774 02:03:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:54.774 02:03:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:54.774 [2024-04-15 02:03:40.370709] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:28:54.774 [2024-04-15 02:03:40.370783] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.774 [2024-04-15 02:03:40.370808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.774 [2024-04-15 02:03:40.370829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.774 [2024-04-15 02:03:40.370845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.774 [2024-04-15 02:03:40.370861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.774 [2024-04-15 02:03:40.370876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.774 [2024-04-15 02:03:40.370893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 
cdw11:00000000 00:28:54.774 [2024-04-15 02:03:40.370907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.774 [2024-04-15 02:03:40.370923] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.774 [2024-04-15 02:03:40.370938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.774 [2024-04-15 02:03:40.370952] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e870 is same with the state(5) to be set 00:28:54.774 [2024-04-15 02:03:40.380726] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e870 (9): Bad file descriptor 00:28:54.774 [2024-04-15 02:03:40.390777] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:55.709 02:03:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:55.709 02:03:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:55.709 02:03:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:55.709 02:03:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:55.709 02:03:41 -- common/autotest_common.sh@10 -- # set +x 00:28:55.709 02:03:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:55.709 02:03:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:55.969 [2024-04-15 02:03:41.427073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:56.908 [2024-04-15 02:03:42.451111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:28:56.908 [2024-04-15 02:03:42.451168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a5e870 with addr=10.0.0.2, port=4420 00:28:56.908 [2024-04-15 02:03:42.451190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5e870 is same with the state(5) to be set 00:28:56.908 [2024-04-15 02:03:42.451571] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5e870 (9): Bad file descriptor 00:28:56.908 [2024-04-15 02:03:42.451610] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:56.908 [2024-04-15 02:03:42.451648] bdev_nvme.c:6451:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:28:56.908 [2024-04-15 02:03:42.451683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.908 [2024-04-15 02:03:42.451704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.908 [2024-04-15 02:03:42.451722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.908 [2024-04-15 02:03:42.451738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.908 [2024-04-15 02:03:42.451760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.908 [2024-04-15 02:03:42.451777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.908 [2024-04-15 02:03:42.451794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.908 [2024-04-15 02:03:42.451809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.908 [2024-04-15 02:03:42.451825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:56.908 [2024-04-15 02:03:42.451841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.908 [2024-04-15 02:03:42.451856] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
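The repeating get_bdev_list/sleep blocks above are the test's poll loop, and the qpair dump in the middle is the expected fallout of pulling the address and downing the target port: reads time out (errno 110), the admin queue is torn down (ABORTED - SQ DELETION), and resets keep failing, yet the loop keeps sampling once a second until nvme0n1 drops out of the bdev list. A hedged reconstruction of the two helpers, using rpc.py where the script uses its rpc_cmd wrapper:

  # Hedged reconstruction of get_bdev_list/wait_for_bdev as traced above.
  get_bdev_list() {
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      # Poll once per second until the bdev list equals "$1".
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }
  wait_for_bdev ''   # after the link drop: wait for nvme0n1 to disappear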
00:28:56.908 [2024-04-15 02:03:42.452225] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a5ec80 (9): Bad file descriptor 00:28:56.908 [2024-04-15 02:03:42.453239] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:28:56.908 [2024-04-15 02:03:42.453261] nvme_ctrlr.c:1135:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:28:56.908 02:03:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:56.908 02:03:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:28:56.908 02:03:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:57.842 02:03:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:57.842 02:03:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:57.842 02:03:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:57.842 02:03:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:57.842 02:03:43 -- common/autotest_common.sh@10 -- # set +x 00:28:57.842 02:03:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:57.842 02:03:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:57.842 02:03:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:58.101 02:03:43 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:28:58.101 02:03:43 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:58.101 02:03:43 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.101 02:03:43 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:28:58.101 02:03:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:58.101 02:03:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:58.101 02:03:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:58.101 02:03:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:58.101 02:03:43 -- common/autotest_common.sh@10 -- # set +x 00:28:58.101 02:03:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:58.101 02:03:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:58.101 02:03:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:58.101 02:03:43 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:58.101 02:03:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:59.037 [2024-04-15 02:03:44.513434] bdev_nvme.c:6700:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:28:59.037 [2024-04-15 02:03:44.513461] bdev_nvme.c:6780:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:28:59.037 [2024-04-15 02:03:44.513485] bdev_nvme.c:6663:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:28:59.037 02:03:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:28:59.037 02:03:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:28:59.037 02:03:44 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:28:59.037 02:03:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:28:59.037 02:03:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:28:59.037 02:03:44 -- common/autotest_common.sh@10 -- # set +x 00:28:59.037 02:03:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:28:59.037 02:03:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:28:59.037 02:03:44 -- 
host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:28:59.037 02:03:44 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:28:59.037 [2024-04-15 02:03:44.640917] bdev_nvme.c:6629:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:28:59.295 [2024-04-15 02:03:44.864430] bdev_nvme.c:7489:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:28:59.295 [2024-04-15 02:03:44.864480] bdev_nvme.c:7489:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:28:59.295 [2024-04-15 02:03:44.864524] bdev_nvme.c:7489:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:28:59.295 [2024-04-15 02:03:44.864548] bdev_nvme.c:6519:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:28:59.295 [2024-04-15 02:03:44.864564] bdev_nvme.c:6478:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:28:59.295 [2024-04-15 02:03:44.871857] bdev_nvme.c:1581:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1a6e200 was disconnected and freed. delete nvme_qpair. 00:29:00.234 02:03:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:29:00.234 02:03:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:00.234 02:03:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:29:00.234 02:03:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:00.234 02:03:45 -- common/autotest_common.sh@10 -- # set +x 00:29:00.234 02:03:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:29:00.234 02:03:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:29:00.234 02:03:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:00.234 02:03:45 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:29:00.234 02:03:45 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:29:00.234 02:03:45 -- host/discovery_remove_ifc.sh@90 -- # killprocess 2274392 00:29:00.234 02:03:45 -- common/autotest_common.sh@926 -- # '[' -z 2274392 ']' 00:29:00.234 02:03:45 -- common/autotest_common.sh@930 -- # kill -0 2274392 00:29:00.234 02:03:45 -- common/autotest_common.sh@931 -- # uname 00:29:00.234 02:03:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:00.234 02:03:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2274392 00:29:00.234 02:03:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:00.234 02:03:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:00.234 02:03:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2274392' 00:29:00.234 killing process with pid 2274392 00:29:00.234 02:03:45 -- common/autotest_common.sh@945 -- # kill 2274392 00:29:00.234 02:03:45 -- common/autotest_common.sh@950 -- # wait 2274392 00:29:00.493 02:03:45 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:29:00.493 02:03:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:00.493 02:03:45 -- nvmf/common.sh@116 -- # sync 00:29:00.493 02:03:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:00.493 02:03:45 -- nvmf/common.sh@119 -- # set +e 00:29:00.493 02:03:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:00.493 02:03:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:00.493 rmmod nvme_tcp 00:29:00.493 rmmod nvme_fabrics 00:29:00.493 rmmod nvme_keyring 00:29:00.493 02:03:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:00.493 02:03:45 -- nvmf/common.sh@123 -- # set -e 
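Recovery, traced just above, mirrors the teardown: once 10.0.0.2/24 is re-added and cvl_0_0 brought back up, the still-running discovery service reconnects, attaches the subsystem as a fresh controller, and the namespace resurfaces as nvme1n1 rather than nvme0n1 (the old controller instance was freed, so the name index advances). Condensed from the recovery trace, reusing the wait_for_bdev helper sketched earlier:

  # Condensed from the recovery trace above.
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  wait_for_bdev nvme1n1   # rediscovery attaches a new controller: nvme1, not nvme0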
00:29:00.493 02:03:45 -- nvmf/common.sh@124 -- # return 0 00:29:00.493 02:03:45 -- nvmf/common.sh@477 -- # '[' -n 2274237 ']' 00:29:00.493 02:03:45 -- nvmf/common.sh@478 -- # killprocess 2274237 00:29:00.493 02:03:45 -- common/autotest_common.sh@926 -- # '[' -z 2274237 ']' 00:29:00.493 02:03:45 -- common/autotest_common.sh@930 -- # kill -0 2274237 00:29:00.493 02:03:45 -- common/autotest_common.sh@931 -- # uname 00:29:00.493 02:03:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:00.493 02:03:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2274237 00:29:00.493 02:03:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:00.493 02:03:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:00.493 02:03:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2274237' 00:29:00.493 killing process with pid 2274237 00:29:00.493 02:03:45 -- common/autotest_common.sh@945 -- # kill 2274237 00:29:00.493 02:03:45 -- common/autotest_common.sh@950 -- # wait 2274237 00:29:00.752 02:03:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:00.752 02:03:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:00.752 02:03:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:00.752 02:03:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:00.752 02:03:46 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:00.752 02:03:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.752 02:03:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:00.752 02:03:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.657 02:03:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:02.658 00:29:02.658 real 0m18.214s 00:29:02.658 user 0m25.219s 00:29:02.658 sys 0m2.981s 00:29:02.658 02:03:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:02.658 02:03:48 -- common/autotest_common.sh@10 -- # set +x 00:29:02.658 ************************************ 00:29:02.658 END TEST nvmf_discovery_remove_ifc 00:29:02.658 ************************************ 00:29:02.658 02:03:48 -- nvmf/nvmf.sh@105 -- # [[ tcp == \t\c\p ]] 00:29:02.658 02:03:48 -- nvmf/nvmf.sh@106 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:02.658 02:03:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:02.658 02:03:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:02.658 02:03:48 -- common/autotest_common.sh@10 -- # set +x 00:29:02.658 ************************************ 00:29:02.658 START TEST nvmf_digest 00:29:02.658 ************************************ 00:29:02.658 02:03:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:02.954 * Looking for test storage... 
00:29:02.954 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:02.954 02:03:48 -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.954 02:03:48 -- nvmf/common.sh@7 -- # uname -s 00:29:02.954 02:03:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.954 02:03:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.954 02:03:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.954 02:03:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.954 02:03:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:02.954 02:03:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:02.954 02:03:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.954 02:03:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:02.954 02:03:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.954 02:03:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:02.954 02:03:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:02.954 02:03:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:02.954 02:03:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.954 02:03:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:02.954 02:03:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:02.954 02:03:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.954 02:03:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.954 02:03:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.954 02:03:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.954 02:03:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.954 02:03:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.954 02:03:48 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.954 02:03:48 -- paths/export.sh@5 -- # export PATH 00:29:02.954 02:03:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.954 02:03:48 -- nvmf/common.sh@46 -- # : 0 00:29:02.954 02:03:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:02.954 02:03:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:02.954 02:03:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:02.954 02:03:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.954 02:03:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.954 02:03:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:02.954 02:03:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:02.954 02:03:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:02.954 02:03:48 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:02.954 02:03:48 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:02.954 02:03:48 -- host/digest.sh@16 -- # runtime=2 00:29:02.954 02:03:48 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:29:02.954 02:03:48 -- host/digest.sh@132 -- # nvmftestinit 00:29:02.954 02:03:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:02.954 02:03:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.954 02:03:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:02.954 02:03:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:02.954 02:03:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:02.954 02:03:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:02.954 02:03:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:02.954 02:03:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:02.954 02:03:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:02.954 02:03:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:02.954 02:03:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:02.954 02:03:48 -- common/autotest_common.sh@10 -- # set +x 00:29:04.860 02:03:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:04.860 02:03:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:04.860 02:03:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:04.860 02:03:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:04.860 02:03:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:04.860 02:03:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:04.860 02:03:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:04.860 02:03:50 -- 
nvmf/common.sh@294 -- # net_devs=() 00:29:04.860 02:03:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:04.860 02:03:50 -- nvmf/common.sh@295 -- # e810=() 00:29:04.860 02:03:50 -- nvmf/common.sh@295 -- # local -ga e810 00:29:04.860 02:03:50 -- nvmf/common.sh@296 -- # x722=() 00:29:04.860 02:03:50 -- nvmf/common.sh@296 -- # local -ga x722 00:29:04.860 02:03:50 -- nvmf/common.sh@297 -- # mlx=() 00:29:04.860 02:03:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:04.860 02:03:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:04.860 02:03:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:04.860 02:03:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:04.860 02:03:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:04.860 02:03:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:04.860 02:03:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:04.860 02:03:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:04.860 02:03:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:04.860 02:03:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:04.860 02:03:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:04.860 02:03:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:04.860 02:03:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:04.860 02:03:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:04.861 02:03:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:04.861 02:03:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:04.861 02:03:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:04.861 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:04.861 02:03:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:04.861 02:03:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:04.861 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:04.861 02:03:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:04.861 02:03:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:04.861 02:03:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.861 02:03:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:04.861 02:03:50 -- nvmf/common.sh@387 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.861 02:03:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:04.861 Found net devices under 0000:0a:00.0: cvl_0_0 00:29:04.861 02:03:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.861 02:03:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:04.861 02:03:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:04.861 02:03:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:04.861 02:03:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:04.861 02:03:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:04.861 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:04.861 02:03:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:04.861 02:03:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:04.861 02:03:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:04.861 02:03:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:04.861 02:03:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:04.861 02:03:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:04.861 02:03:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:04.861 02:03:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:04.861 02:03:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:04.861 02:03:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:04.861 02:03:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:04.861 02:03:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:04.861 02:03:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:04.861 02:03:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:04.861 02:03:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:04.861 02:03:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:04.861 02:03:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:04.861 02:03:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:04.861 02:03:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:04.861 02:03:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:04.861 02:03:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:04.861 02:03:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:04.861 02:03:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:04.861 02:03:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:04.861 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:04.861 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:29:04.861 00:29:04.861 --- 10.0.0.2 ping statistics --- 00:29:04.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.861 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:29:04.861 02:03:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:04.861 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:04.861 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:29:04.861 00:29:04.861 --- 10.0.0.1 ping statistics --- 00:29:04.861 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:04.861 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:29:04.861 02:03:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:04.861 02:03:50 -- nvmf/common.sh@410 -- # return 0 00:29:04.861 02:03:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:04.861 02:03:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:04.861 02:03:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:04.861 02:03:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:04.861 02:03:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:04.861 02:03:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:04.861 02:03:50 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:04.861 02:03:50 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:29:04.861 02:03:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:04.861 02:03:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:04.861 02:03:50 -- common/autotest_common.sh@10 -- # set +x 00:29:04.861 ************************************ 00:29:04.861 START TEST nvmf_digest_clean 00:29:04.861 ************************************ 00:29:04.861 02:03:50 -- common/autotest_common.sh@1104 -- # run_digest 00:29:04.861 02:03:50 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:29:04.861 02:03:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:04.861 02:03:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:04.861 02:03:50 -- common/autotest_common.sh@10 -- # set +x 00:29:04.861 02:03:50 -- nvmf/common.sh@469 -- # nvmfpid=2277905 00:29:04.861 02:03:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:04.861 02:03:50 -- nvmf/common.sh@470 -- # waitforlisten 2277905 00:29:04.861 02:03:50 -- common/autotest_common.sh@819 -- # '[' -z 2277905 ']' 00:29:04.861 02:03:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.861 02:03:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:04.861 02:03:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.861 02:03:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:04.861 02:03:50 -- common/autotest_common.sh@10 -- # set +x 00:29:04.861 [2024-04-15 02:03:50.414435] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:29:04.861 [2024-04-15 02:03:50.414523] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.861 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.861 [2024-04-15 02:03:50.479442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.121 [2024-04-15 02:03:50.563605] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:05.121 [2024-04-15 02:03:50.563755] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:05.121 [2024-04-15 02:03:50.563771] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:05.121 [2024-04-15 02:03:50.563783] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:05.121 [2024-04-15 02:03:50.563833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:05.121 02:03:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:05.121 02:03:50 -- common/autotest_common.sh@852 -- # return 0 00:29:05.121 02:03:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:05.121 02:03:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:05.121 02:03:50 -- common/autotest_common.sh@10 -- # set +x 00:29:05.121 02:03:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:05.121 02:03:50 -- host/digest.sh@120 -- # common_target_config 00:29:05.121 02:03:50 -- host/digest.sh@43 -- # rpc_cmd 00:29:05.121 02:03:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:05.121 02:03:50 -- common/autotest_common.sh@10 -- # set +x 00:29:05.121 null0 00:29:05.121 [2024-04-15 02:03:50.763676] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.380 [2024-04-15 02:03:50.787911] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.380 02:03:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:05.380 02:03:50 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:29:05.380 02:03:50 -- host/digest.sh@77 -- # local rw bs qd 00:29:05.380 02:03:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:05.380 02:03:50 -- host/digest.sh@80 -- # rw=randread 00:29:05.380 02:03:50 -- host/digest.sh@80 -- # bs=4096 00:29:05.380 02:03:50 -- host/digest.sh@80 -- # qd=128 00:29:05.380 02:03:50 -- host/digest.sh@82 -- # bperfpid=2277930 00:29:05.380 02:03:50 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:05.380 02:03:50 -- host/digest.sh@83 -- # waitforlisten 2277930 /var/tmp/bperf.sock 00:29:05.380 02:03:50 -- common/autotest_common.sh@819 -- # '[' -z 2277930 ']' 00:29:05.380 02:03:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:05.380 02:03:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:05.380 02:03:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:05.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:05.380 02:03:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:05.380 02:03:50 -- common/autotest_common.sh@10 -- # set +x 00:29:05.380 [2024-04-15 02:03:50.831230] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:29:05.380 [2024-04-15 02:03:50.831292] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2277930 ] 00:29:05.380 EAL: No free 2048 kB hugepages reported on node 1 00:29:05.380 [2024-04-15 02:03:50.893395] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.380 [2024-04-15 02:03:50.983220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.638 02:03:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:05.638 02:03:51 -- common/autotest_common.sh@852 -- # return 0 00:29:05.638 02:03:51 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:05.638 02:03:51 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:05.638 02:03:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:05.896 02:03:51 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:05.896 02:03:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:06.465 nvme0n1 00:29:06.465 02:03:51 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:06.465 02:03:51 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:06.465 Running I/O for 2 seconds... 
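[annotation] Each run_bperf invocation above follows the same four-step pattern, condensed here from the trace (repo paths shortened for readability; the socket, address, and NQN are the ones the test actually uses):

    # 1. start bdevperf paused (-z --wait-for-rpc) so options can be set first
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    # 2. finish framework init over bdevperf's private RPC socket
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # 3. attach the NVMe/TCP controller with data digest (--ddgst) enabled
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # 4. drive I/O for the configured 2 seconds
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests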
00:29:09.001 00:29:09.001 Latency(us) 00:29:09.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.001 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:09.001 nvme0n1 : 2.00 22236.20 86.86 0.00 0.00 5748.77 3046.21 16505.36 00:29:09.001 =================================================================================================================== 00:29:09.001 Total : 22236.20 86.86 0.00 0.00 5748.77 3046.21 16505.36 00:29:09.001 0 00:29:09.001 02:03:54 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:09.001 02:03:54 -- host/digest.sh@92 -- # get_accel_stats 00:29:09.001 02:03:54 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:09.001 02:03:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:09.001 02:03:54 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:09.001 | select(.opcode=="crc32c") 00:29:09.001 | "\(.module_name) \(.executed)"' 00:29:09.001 02:03:54 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:09.001 02:03:54 -- host/digest.sh@93 -- # exp_module=software 00:29:09.001 02:03:54 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:09.001 02:03:54 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:09.001 02:03:54 -- host/digest.sh@97 -- # killprocess 2277930 00:29:09.001 02:03:54 -- common/autotest_common.sh@926 -- # '[' -z 2277930 ']' 00:29:09.001 02:03:54 -- common/autotest_common.sh@930 -- # kill -0 2277930 00:29:09.001 02:03:54 -- common/autotest_common.sh@931 -- # uname 00:29:09.001 02:03:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:09.001 02:03:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2277930 00:29:09.001 02:03:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:09.001 02:03:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:09.001 02:03:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2277930' 00:29:09.001 killing process with pid 2277930 00:29:09.001 02:03:54 -- common/autotest_common.sh@945 -- # kill 2277930 00:29:09.001 Received shutdown signal, test time was about 2.000000 seconds 00:29:09.001 00:29:09.001 Latency(us) 00:29:09.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:09.001 =================================================================================================================== 00:29:09.001 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:09.001 02:03:54 -- common/autotest_common.sh@950 -- # wait 2277930 00:29:09.001 02:03:54 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:29:09.001 02:03:54 -- host/digest.sh@77 -- # local rw bs qd 00:29:09.001 02:03:54 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:09.001 02:03:54 -- host/digest.sh@80 -- # rw=randread 00:29:09.001 02:03:54 -- host/digest.sh@80 -- # bs=131072 00:29:09.001 02:03:54 -- host/digest.sh@80 -- # qd=16 00:29:09.001 02:03:54 -- host/digest.sh@82 -- # bperfpid=2278352 00:29:09.001 02:03:54 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:09.001 02:03:54 -- host/digest.sh@83 -- # waitforlisten 2278352 /var/tmp/bperf.sock 00:29:09.001 02:03:54 -- common/autotest_common.sh@819 -- # '[' -z 2278352 ']' 00:29:09.001 02:03:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 
00:29:09.001 02:03:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:09.001 02:03:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:09.001 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:09.001 02:03:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:09.002 02:03:54 -- common/autotest_common.sh@10 -- # set +x 00:29:09.002 [2024-04-15 02:03:54.572795] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:29:09.002 [2024-04-15 02:03:54.572867] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2278352 ] 00:29:09.002 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:09.002 Zero copy mechanism will not be used. 00:29:09.002 EAL: No free 2048 kB hugepages reported on node 1 00:29:09.002 [2024-04-15 02:03:54.634581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.260 [2024-04-15 02:03:54.723401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.260 02:03:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:09.260 02:03:54 -- common/autotest_common.sh@852 -- # return 0 00:29:09.260 02:03:54 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:09.260 02:03:54 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:09.260 02:03:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:09.518 02:03:55 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.518 02:03:55 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:10.084 nvme0n1 00:29:10.084 02:03:55 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:10.084 02:03:55 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:10.084 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:10.084 Zero copy mechanism will not be used. 00:29:10.084 Running I/O for 2 seconds... 
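[annotation] The digest-clean test repeats that pattern over a small workload matrix: read vs. write, small blocks at high queue depth vs. 128 KiB blocks at low depth. The 131072-byte runs also log that the I/O size exceeds the 65536-byte zero-copy threshold, so the TCP zero-copy send path is skipped for them. Roughly (run_bperf is the helper from host/digest.sh; the loop itself is only an illustration of the four calls visible in the trace):

    # illustration only: host/digest.sh calls run_bperf four times with these args
    for args in 'randread 4096 128' 'randread 131072 16' \
                'randwrite 4096 128' 'randwrite 131072 16'; do
        run_bperf $args    # rw, block size, queue depth
    done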
00:29:12.616 00:29:12.616 Latency(us) 00:29:12.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.616 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:12.616 nvme0n1 : 2.01 1905.09 238.14 0.00 0.00 8394.71 8058.50 13592.65 00:29:12.616 =================================================================================================================== 00:29:12.616 Total : 1905.09 238.14 0.00 0.00 8394.71 8058.50 13592.65 00:29:12.616 0 00:29:12.616 02:03:57 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:12.616 02:03:57 -- host/digest.sh@92 -- # get_accel_stats 00:29:12.616 02:03:57 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:12.616 02:03:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:12.616 02:03:57 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:12.616 | select(.opcode=="crc32c") 00:29:12.616 | "\(.module_name) \(.executed)"' 00:29:12.616 02:03:57 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:12.616 02:03:57 -- host/digest.sh@93 -- # exp_module=software 00:29:12.616 02:03:57 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:12.616 02:03:57 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:12.616 02:03:57 -- host/digest.sh@97 -- # killprocess 2278352 00:29:12.616 02:03:57 -- common/autotest_common.sh@926 -- # '[' -z 2278352 ']' 00:29:12.616 02:03:57 -- common/autotest_common.sh@930 -- # kill -0 2278352 00:29:12.616 02:03:57 -- common/autotest_common.sh@931 -- # uname 00:29:12.616 02:03:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:12.616 02:03:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2278352 00:29:12.616 02:03:57 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:12.616 02:03:57 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:12.616 02:03:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2278352' 00:29:12.616 killing process with pid 2278352 00:29:12.616 02:03:57 -- common/autotest_common.sh@945 -- # kill 2278352 00:29:12.616 Received shutdown signal, test time was about 2.000000 seconds 00:29:12.616 00:29:12.616 Latency(us) 00:29:12.616 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.616 =================================================================================================================== 00:29:12.616 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:12.616 02:03:57 -- common/autotest_common.sh@950 -- # wait 2278352 00:29:12.616 02:03:58 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:29:12.616 02:03:58 -- host/digest.sh@77 -- # local rw bs qd 00:29:12.616 02:03:58 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:12.616 02:03:58 -- host/digest.sh@80 -- # rw=randwrite 00:29:12.616 02:03:58 -- host/digest.sh@80 -- # bs=4096 00:29:12.616 02:03:58 -- host/digest.sh@80 -- # qd=128 00:29:12.616 02:03:58 -- host/digest.sh@82 -- # bperfpid=2278784 00:29:12.616 02:03:58 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:12.616 02:03:58 -- host/digest.sh@83 -- # waitforlisten 2278784 /var/tmp/bperf.sock 00:29:12.616 02:03:58 -- common/autotest_common.sh@819 -- # '[' -z 2278784 ']' 00:29:12.616 02:03:58 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:29:12.616 02:03:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:12.616 02:03:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:12.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:12.616 02:03:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:12.616 02:03:58 -- common/autotest_common.sh@10 -- # set +x 00:29:12.616 [2024-04-15 02:03:58.199226] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:29:12.616 [2024-04-15 02:03:58.199309] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2278784 ] 00:29:12.616 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.875 [2024-04-15 02:03:58.266297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.875 [2024-04-15 02:03:58.355058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.875 02:03:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:12.875 02:03:58 -- common/autotest_common.sh@852 -- # return 0 00:29:12.875 02:03:58 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:12.875 02:03:58 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:12.875 02:03:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:13.133 02:03:58 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:13.133 02:03:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:13.699 nvme0n1 00:29:13.699 02:03:59 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:13.699 02:03:59 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:13.699 Running I/O for 2 seconds... 
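[annotation] After each run the harness verifies which accel module actually executed the crc32c digest operations. get_accel_stats reads the module name and executed count out of accel_get_stats with the jq filter shown in the trace, and the test then asserts the module is "software" (no hardware crc32c offload is configured on this rig) and that the executed count is nonzero:

    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
    # expected here: software <nonzero count>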
00:29:15.605 00:29:15.605 Latency(us) 00:29:15.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.605 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:15.605 nvme0n1 : 2.01 19098.78 74.60 0.00 0.00 6687.83 3495.25 13204.29 00:29:15.605 =================================================================================================================== 00:29:15.605 Total : 19098.78 74.60 0.00 0.00 6687.83 3495.25 13204.29 00:29:15.605 0 00:29:15.605 02:04:01 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:15.605 02:04:01 -- host/digest.sh@92 -- # get_accel_stats 00:29:15.605 02:04:01 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:15.605 02:04:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:15.605 02:04:01 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:15.605 | select(.opcode=="crc32c") 00:29:15.605 | "\(.module_name) \(.executed)"' 00:29:15.863 02:04:01 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:15.863 02:04:01 -- host/digest.sh@93 -- # exp_module=software 00:29:15.863 02:04:01 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:15.863 02:04:01 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:15.863 02:04:01 -- host/digest.sh@97 -- # killprocess 2278784 00:29:15.863 02:04:01 -- common/autotest_common.sh@926 -- # '[' -z 2278784 ']' 00:29:15.863 02:04:01 -- common/autotest_common.sh@930 -- # kill -0 2278784 00:29:15.863 02:04:01 -- common/autotest_common.sh@931 -- # uname 00:29:15.863 02:04:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:15.863 02:04:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2278784 00:29:15.863 02:04:01 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:15.863 02:04:01 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:15.863 02:04:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2278784' 00:29:15.863 killing process with pid 2278784 00:29:15.863 02:04:01 -- common/autotest_common.sh@945 -- # kill 2278784 00:29:15.863 Received shutdown signal, test time was about 2.000000 seconds 00:29:15.863 00:29:15.863 Latency(us) 00:29:15.863 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.863 =================================================================================================================== 00:29:15.863 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:15.863 02:04:01 -- common/autotest_common.sh@950 -- # wait 2278784 00:29:16.121 02:04:01 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:29:16.121 02:04:01 -- host/digest.sh@77 -- # local rw bs qd 00:29:16.121 02:04:01 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:16.121 02:04:01 -- host/digest.sh@80 -- # rw=randwrite 00:29:16.121 02:04:01 -- host/digest.sh@80 -- # bs=131072 00:29:16.121 02:04:01 -- host/digest.sh@80 -- # qd=16 00:29:16.121 02:04:01 -- host/digest.sh@82 -- # bperfpid=2279309 00:29:16.121 02:04:01 -- host/digest.sh@83 -- # waitforlisten 2279309 /var/tmp/bperf.sock 00:29:16.121 02:04:01 -- host/digest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:16.121 02:04:01 -- common/autotest_common.sh@819 -- # '[' -z 2279309 ']' 00:29:16.121 02:04:01 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:29:16.121 02:04:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:16.121 02:04:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:16.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:16.121 02:04:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:16.121 02:04:01 -- common/autotest_common.sh@10 -- # set +x 00:29:16.121 [2024-04-15 02:04:01.708401] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:29:16.121 [2024-04-15 02:04:01.708489] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2279309 ] 00:29:16.121 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:16.121 Zero copy mechanism will not be used. 00:29:16.121 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.380 [2024-04-15 02:04:01.771926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.380 [2024-04-15 02:04:01.858394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.380 02:04:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:16.380 02:04:01 -- common/autotest_common.sh@852 -- # return 0 00:29:16.380 02:04:01 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:29:16.380 02:04:01 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:29:16.380 02:04:01 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:16.638 02:04:02 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:16.638 02:04:02 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:17.206 nvme0n1 00:29:17.206 02:04:02 -- host/digest.sh@91 -- # bperf_py perform_tests 00:29:17.206 02:04:02 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:17.206 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:17.206 Zero copy mechanism will not be used. 00:29:17.206 Running I/O for 2 seconds... 
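[annotation] The killprocess helper that closes out each run is defensive: it first checks the pid is still alive, resolves its command name (reactor_1 for the bdevperf workers here) and refuses to signal a sudo wrapper directly, then reaps the child so the next bperfpid can be tracked cleanly. Condensed from the trace; the real helper in autotest_common.sh handles a few more cases:

    kill -0 "$pid"                      # still running?
    ps --no-headers -o comm= "$pid"     # e.g. reactor_1
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"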
00:29:19.740 00:29:19.740 Latency(us) 00:29:19.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.740 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:19.740 nvme0n1 : 2.02 971.69 121.46 0.00 0.00 16389.32 4271.98 19418.07 00:29:19.740 =================================================================================================================== 00:29:19.740 Total : 971.69 121.46 0.00 0.00 16389.32 4271.98 19418.07 00:29:19.740 0 00:29:19.740 02:04:04 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:29:19.740 02:04:04 -- host/digest.sh@92 -- # get_accel_stats 00:29:19.740 02:04:04 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:19.740 02:04:04 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:19.740 02:04:04 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:19.740 | select(.opcode=="crc32c") 00:29:19.740 | "\(.module_name) \(.executed)"' 00:29:19.740 02:04:05 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:29:19.740 02:04:05 -- host/digest.sh@93 -- # exp_module=software 00:29:19.740 02:04:05 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:29:19.740 02:04:05 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:19.740 02:04:05 -- host/digest.sh@97 -- # killprocess 2279309 00:29:19.740 02:04:05 -- common/autotest_common.sh@926 -- # '[' -z 2279309 ']' 00:29:19.740 02:04:05 -- common/autotest_common.sh@930 -- # kill -0 2279309 00:29:19.740 02:04:05 -- common/autotest_common.sh@931 -- # uname 00:29:19.740 02:04:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:19.740 02:04:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2279309 00:29:19.740 02:04:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:19.740 02:04:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:19.740 02:04:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2279309' 00:29:19.740 killing process with pid 2279309 00:29:19.740 02:04:05 -- common/autotest_common.sh@945 -- # kill 2279309 00:29:19.740 Received shutdown signal, test time was about 2.000000 seconds 00:29:19.740 00:29:19.740 Latency(us) 00:29:19.740 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.740 =================================================================================================================== 00:29:19.740 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:19.740 02:04:05 -- common/autotest_common.sh@950 -- # wait 2279309 00:29:19.740 02:04:05 -- host/digest.sh@126 -- # killprocess 2277905 00:29:19.740 02:04:05 -- common/autotest_common.sh@926 -- # '[' -z 2277905 ']' 00:29:19.740 02:04:05 -- common/autotest_common.sh@930 -- # kill -0 2277905 00:29:19.740 02:04:05 -- common/autotest_common.sh@931 -- # uname 00:29:19.740 02:04:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:19.740 02:04:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2277905 00:29:19.740 02:04:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:29:19.740 02:04:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:29:19.740 02:04:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2277905' 00:29:19.740 killing process with pid 2277905 00:29:19.740 02:04:05 -- common/autotest_common.sh@945 -- # kill 2277905 00:29:19.740 02:04:05 -- common/autotest_common.sh@950 -- # wait 2277905 
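[annotation] That final kill/wait pair (pid 2277905) takes down the nvmf target itself, which ends nvmf_digest_clean; the timing summary and END banner follow. For the error test the target is then started again the same way, inside the cvl_0_0_ns_spdk network namespace created during nvmf_tcp_init (launch line from the trace, repo path shortened):

    ip netns exec cvl_0_0_ns_spdk \
        build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc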
00:29:19.998 00:29:19.998 real 0m15.177s 00:29:19.998 user 0m30.487s 00:29:19.998 sys 0m3.773s 00:29:19.998 02:04:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:19.998 02:04:05 -- common/autotest_common.sh@10 -- # set +x 00:29:19.998 ************************************ 00:29:19.998 END TEST nvmf_digest_clean 00:29:19.998 ************************************ 00:29:19.998 02:04:05 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:29:19.998 02:04:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:29:19.998 02:04:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:19.998 02:04:05 -- common/autotest_common.sh@10 -- # set +x 00:29:19.998 ************************************ 00:29:19.998 START TEST nvmf_digest_error 00:29:19.998 ************************************ 00:29:19.998 02:04:05 -- common/autotest_common.sh@1104 -- # run_digest_error 00:29:19.998 02:04:05 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:29:19.998 02:04:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:19.998 02:04:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:19.998 02:04:05 -- common/autotest_common.sh@10 -- # set +x 00:29:19.998 02:04:05 -- nvmf/common.sh@469 -- # nvmfpid=2279761 00:29:19.998 02:04:05 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:19.998 02:04:05 -- nvmf/common.sh@470 -- # waitforlisten 2279761 00:29:19.998 02:04:05 -- common/autotest_common.sh@819 -- # '[' -z 2279761 ']' 00:29:19.998 02:04:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:19.998 02:04:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:19.998 02:04:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:19.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:19.998 02:04:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:19.998 02:04:05 -- common/autotest_common.sh@10 -- # set +x 00:29:19.998 [2024-04-15 02:04:05.629747] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:29:19.998 [2024-04-15 02:04:05.629840] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:20.257 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.257 [2024-04-15 02:04:05.699807] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.257 [2024-04-15 02:04:05.784734] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:20.257 [2024-04-15 02:04:05.784901] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:20.257 [2024-04-15 02:04:05.784922] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:20.257 [2024-04-15 02:04:05.784936] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:20.257 [2024-04-15 02:04:05.784974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:20.257 02:04:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:20.257 02:04:05 -- common/autotest_common.sh@852 -- # return 0 00:29:20.257 02:04:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:20.257 02:04:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:29:20.257 02:04:05 -- common/autotest_common.sh@10 -- # set +x 00:29:20.257 02:04:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:20.257 02:04:05 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:20.257 02:04:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.257 02:04:05 -- common/autotest_common.sh@10 -- # set +x 00:29:20.257 [2024-04-15 02:04:05.857569] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:20.257 02:04:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.257 02:04:05 -- host/digest.sh@104 -- # common_target_config 00:29:20.257 02:04:05 -- host/digest.sh@43 -- # rpc_cmd 00:29:20.257 02:04:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:20.257 02:04:05 -- common/autotest_common.sh@10 -- # set +x 00:29:20.516 null0 00:29:20.516 [2024-04-15 02:04:05.976465] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:20.516 [2024-04-15 02:04:06.000676] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:20.516 02:04:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:20.516 02:04:06 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:29:20.516 02:04:06 -- host/digest.sh@54 -- # local rw bs qd 00:29:20.516 02:04:06 -- host/digest.sh@56 -- # rw=randread 00:29:20.516 02:04:06 -- host/digest.sh@56 -- # bs=4096 00:29:20.516 02:04:06 -- host/digest.sh@56 -- # qd=128 00:29:20.516 02:04:06 -- host/digest.sh@58 -- # bperfpid=2279780 00:29:20.516 02:04:06 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:20.516 02:04:06 -- host/digest.sh@60 -- # waitforlisten 2279780 /var/tmp/bperf.sock 00:29:20.516 02:04:06 -- common/autotest_common.sh@819 -- # '[' -z 2279780 ']' 00:29:20.516 02:04:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:20.516 02:04:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:20.516 02:04:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:20.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:20.516 02:04:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:20.516 02:04:06 -- common/autotest_common.sh@10 -- # set +x 00:29:20.516 [2024-04-15 02:04:06.044535] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
00:29:20.516 [2024-04-15 02:04:06.044597] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2279780 ] 00:29:20.516 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.516 [2024-04-15 02:04:06.108399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.808 [2024-04-15 02:04:06.199192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.397 02:04:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:29:21.397 02:04:07 -- common/autotest_common.sh@852 -- # return 0 00:29:21.397 02:04:07 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:21.397 02:04:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:21.655 02:04:07 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:21.655 02:04:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:21.655 02:04:07 -- common/autotest_common.sh@10 -- # set +x 00:29:21.655 02:04:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:21.655 02:04:07 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.655 02:04:07 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:22.224 nvme0n1 00:29:22.224 02:04:07 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:22.224 02:04:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:29:22.225 02:04:07 -- common/autotest_common.sh@10 -- # set +x 00:29:22.225 02:04:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:29:22.225 02:04:07 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:22.225 02:04:07 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:22.225 Running I/O for 2 seconds... 
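[annotation] The wall of data-digest errors that follows is the point of nvmf_digest_error: at startup the target routes the crc32c opcode through the error-injecting accel module, and the harness then arms it to corrupt a batch of operations. Every affected read reaches the initiator with a digest mismatch and completes as a transient transport error, which bdevperf keeps retrying because bdev_nvme_set_options was given --bdev-retry-count -1 over its RPC socket before the controller was attached. The relevant RPCs from the trace (rpc_cmd is the test helper that talks to the target's /var/tmp/spdk.sock):

    # on the target, before start: assign crc32c to the error module
    rpc_cmd accel_assign_opc -o crc32c -m error
    # per run: start clean, then corrupt 256 crc32c operations
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256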
00:29:22.225 [2024-04-15 02:04:07.751059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.225 [2024-04-15 02:04:07.751123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.225 [2024-04-15 02:04:07.751142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.225 [2024-04-15 02:04:07.764263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.225 [2024-04-15 02:04:07.764311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.225 [2024-04-15 02:04:07.764330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.225 [2024-04-15 02:04:07.775455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.225 [2024-04-15 02:04:07.775493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.225 [2024-04-15 02:04:07.775511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.225 [2024-04-15 02:04:07.788356] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.225 [2024-04-15 02:04:07.788388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.225 [2024-04-15 02:04:07.788405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.225 [2024-04-15 02:04:07.800215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.225 [2024-04-15 02:04:07.800246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.225 [2024-04-15 02:04:07.800280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.225 [2024-04-15 02:04:07.812639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.225 [2024-04-15 02:04:07.812670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:23627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.225 [2024-04-15 02:04:07.812687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.225 [2024-04-15 02:04:07.824330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.225 [2024-04-15 02:04:07.824375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.225 [2024-04-15 02:04:07.824392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.225 [2024-04-15 02:04:07.835853] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.225 [2024-04-15 02:04:07.835882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9290 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.225 [2024-04-15 02:04:07.835899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.225 [2024-04-15 02:04:07.847430] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.225 [2024-04-15 02:04:07.847460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:1555 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.225 [2024-04-15 02:04:07.847491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.225 [2024-04-15 02:04:07.859634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.225 [2024-04-15 02:04:07.859664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:14685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.225 [2024-04-15 02:04:07.859681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.225 [2024-04-15 02:04:07.871378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.225 [2024-04-15 02:04:07.871424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.225 [2024-04-15 02:04:07.871440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.484 [2024-04-15 02:04:07.882867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.484 [2024-04-15 02:04:07.882897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.484 [2024-04-15 02:04:07.882914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.484 [2024-04-15 02:04:07.894509] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.484 [2024-04-15 02:04:07.894539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.484 [2024-04-15 02:04:07.894555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.484 [2024-04-15 02:04:07.906840] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.484 [2024-04-15 02:04:07.906870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:13949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.484 [2024-04-15 02:04:07.906902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.484 [2024-04-15 02:04:07.918456] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.484 [2024-04-15 02:04:07.918486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.484 [2024-04-15 02:04:07.918502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.484 [2024-04-15 02:04:07.929864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.484 [2024-04-15 02:04:07.929894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.484 [2024-04-15 02:04:07.929910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.484 [2024-04-15 02:04:07.941636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.484 [2024-04-15 02:04:07.941666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:6488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.484 [2024-04-15 02:04:07.941698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.485 [2024-04-15 02:04:07.953826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.485 [2024-04-15 02:04:07.953856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.485 [2024-04-15 02:04:07.953888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.485 [2024-04-15 02:04:07.965281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.485 [2024-04-15 02:04:07.965311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:21116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.485 [2024-04-15 02:04:07.965327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.485 [2024-04-15 02:04:07.976834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.485 [2024-04-15 02:04:07.976864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.485 [2024-04-15 02:04:07.976885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.485 [2024-04-15 02:04:07.988450] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.485 [2024-04-15 02:04:07.988481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:22.485 [2024-04-15 02:04:07.988512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.485 [2024-04-15 02:04:08.000737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.485 [2024-04-15 02:04:08.000767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:25206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.485 [2024-04-15 02:04:08.000783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.485 [2024-04-15 02:04:08.012404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.485 [2024-04-15 02:04:08.012433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.485 [2024-04-15 02:04:08.012450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.485 [2024-04-15 02:04:08.023701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.485 [2024-04-15 02:04:08.023732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.485 [2024-04-15 02:04:08.023748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.485 [2024-04-15 02:04:08.036249] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.485 [2024-04-15 02:04:08.036280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.485 [2024-04-15 02:04:08.036297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.485 [2024-04-15 02:04:08.047886] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.485 [2024-04-15 02:04:08.047916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21072 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.485 [2024-04-15 02:04:08.047933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.485 [2024-04-15 02:04:08.059349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.485 [2024-04-15 02:04:08.059379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:10950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.485 [2024-04-15 02:04:08.059395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.485 [2024-04-15 02:04:08.070944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.485 [2024-04-15 02:04:08.070974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 
lba:4173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.485 [2024-04-15 02:04:08.070990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.485 [2024-04-15 02:04:08.083528] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.485 [2024-04-15 02:04:08.083563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.485 [2024-04-15 02:04:08.083580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.485 [2024-04-15 02:04:08.095276] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.485 [2024-04-15 02:04:08.095305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.485 [2024-04-15 02:04:08.095321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.485 [2024-04-15 02:04:08.106750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.485 [2024-04-15 02:04:08.106779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2262 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.485 [2024-04-15 02:04:08.106795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.485 [2024-04-15 02:04:08.118586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.485 [2024-04-15 02:04:08.118615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.485 [2024-04-15 02:04:08.118632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.485 [2024-04-15 02:04:08.131114] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.485 [2024-04-15 02:04:08.131144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.485 [2024-04-15 02:04:08.131161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.744 [2024-04-15 02:04:08.142993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.744 [2024-04-15 02:04:08.143023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.744 [2024-04-15 02:04:08.143040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.744 [2024-04-15 02:04:08.154491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.744 [2024-04-15 02:04:08.154520] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.744 [2024-04-15 02:04:08.154537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.744 [2024-04-15 02:04:08.166181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.744 [2024-04-15 02:04:08.166226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.744 [2024-04-15 02:04:08.166243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.744 [2024-04-15 02:04:08.178464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.744 [2024-04-15 02:04:08.178495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.744 [2024-04-15 02:04:08.178512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.744 [2024-04-15 02:04:08.190158] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.744 [2024-04-15 02:04:08.190187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:22198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.744 [2024-04-15 02:04:08.190203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.744 [2024-04-15 02:04:08.201672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.744 [2024-04-15 02:04:08.201702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.744 [2024-04-15 02:04:08.201719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.744 [2024-04-15 02:04:08.213305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.744 [2024-04-15 02:04:08.213336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:13266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.744 [2024-04-15 02:04:08.213353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.744 [2024-04-15 02:04:08.225464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.744 [2024-04-15 02:04:08.225494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.744 [2024-04-15 02:04:08.225511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.744 [2024-04-15 02:04:08.237274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 
00:29:22.744 [2024-04-15 02:04:08.237305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.744 [2024-04-15 02:04:08.237321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.745 [2024-04-15 02:04:08.248864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.745 [2024-04-15 02:04:08.248907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:8751 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.745 [2024-04-15 02:04:08.248923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.745 [2024-04-15 02:04:08.261229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.745 [2024-04-15 02:04:08.261273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.745 [2024-04-15 02:04:08.261289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.745 [2024-04-15 02:04:08.272784] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.745 [2024-04-15 02:04:08.272813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.745 [2024-04-15 02:04:08.272829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.745 [2024-04-15 02:04:08.284337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.745 [2024-04-15 02:04:08.284371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:7156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.745 [2024-04-15 02:04:08.284388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.745 [2024-04-15 02:04:08.296068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.745 [2024-04-15 02:04:08.296101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:7515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.745 [2024-04-15 02:04:08.296119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.745 [2024-04-15 02:04:08.308578] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.745 [2024-04-15 02:04:08.308607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24833 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.745 [2024-04-15 02:04:08.308626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.745 [2024-04-15 02:04:08.320194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.745 [2024-04-15 02:04:08.320223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:15698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.745 [2024-04-15 02:04:08.320239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.745 [2024-04-15 02:04:08.331845] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.745 [2024-04-15 02:04:08.331873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:8091 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.745 [2024-04-15 02:04:08.331891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.745 [2024-04-15 02:04:08.344029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.745 [2024-04-15 02:04:08.344074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.745 [2024-04-15 02:04:08.344092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.745 [2024-04-15 02:04:08.355882] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.745 [2024-04-15 02:04:08.355911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:16144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.745 [2024-04-15 02:04:08.355930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.745 [2024-04-15 02:04:08.367311] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.745 [2024-04-15 02:04:08.367340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:10168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.745 [2024-04-15 02:04:08.367356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:22.745 [2024-04-15 02:04:08.379441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:22.745 [2024-04-15 02:04:08.379471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:22762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:22.745 [2024-04-15 02:04:08.379512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.391749] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.391779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:11637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.391798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.403705] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.403734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.403761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.415674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.415715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:25236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.415731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.427646] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.427676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16317 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.427693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.440161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.440192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.440210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.452149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.452179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.452196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.463650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.463678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.463696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.475653] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.475683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15767 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.475702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.487919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.487949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.487977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.499751] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.499781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.499797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.511454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.511483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.511502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.523200] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.523245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.523262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.535508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.535553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.535572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.547207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.547236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:9108 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.547253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.558590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.558619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.558638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.571157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.571188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.571211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.582883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.582912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:8103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.582930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.594361] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.594394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4765 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.594411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.605986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.606015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.004 [2024-04-15 02:04:08.606052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.004 [2024-04-15 02:04:08.618293] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.004 [2024-04-15 02:04:08.618323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2038 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.005 [2024-04-15 02:04:08.618340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.005 [2024-04-15 02:04:08.630025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.005 [2024-04-15 02:04:08.630062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:9065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.005 [2024-04-15 02:04:08.630080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.005 [2024-04-15 02:04:08.641423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.005 [2024-04-15 02:04:08.641451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:10994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.005 [2024-04-15 02:04:08.641470] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.653499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.653531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.653553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.665704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.665733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.665757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.677302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.677331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.677352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.689705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.689735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:22260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.689755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.701277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.701306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:14118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.701328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.712965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.712994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.713011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.725188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.725218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:23.264 [2024-04-15 02:04:08.725250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.737499] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.737530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.737549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.749082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.749112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:24581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.749132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.760473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.760503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.760524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.772159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.772207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.772225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.784542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.784573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:13035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.784608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.796266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.796302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.796340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.807670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.807699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 
lba:6748 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.807719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.819253] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.819283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:3835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.819302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.831541] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.831571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:19727 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.831591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.843171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.843200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:2248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.843218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.854650] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.854681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.854700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.867197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.867228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.867247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.878893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.878938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.878958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.890553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.890582] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:24690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.264 [2024-04-15 02:04:08.890601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.264 [2024-04-15 02:04:08.902080] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.264 [2024-04-15 02:04:08.902109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:3190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.265 [2024-04-15 02:04:08.902127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.525 [2024-04-15 02:04:08.914586] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.525 [2024-04-15 02:04:08.914617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.525 [2024-04-15 02:04:08.914649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.525 [2024-04-15 02:04:08.926102] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.525 [2024-04-15 02:04:08.926131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:8519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.525 [2024-04-15 02:04:08.926148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.525 [2024-04-15 02:04:08.937640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.525 [2024-04-15 02:04:08.937670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.525 [2024-04-15 02:04:08.937686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.525 [2024-04-15 02:04:08.950297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.525 [2024-04-15 02:04:08.950327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14852 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.525 [2024-04-15 02:04:08.950343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.525 [2024-04-15 02:04:08.961826] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.525 [2024-04-15 02:04:08.961856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:11528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-15 02:04:08.961873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.526 [2024-04-15 02:04:08.973461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 
00:29:23.526 [2024-04-15 02:04:08.973490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19370 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-15 02:04:08.973506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.526 [2024-04-15 02:04:08.984977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.526 [2024-04-15 02:04:08.985006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16947 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-15 02:04:08.985022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.526 [2024-04-15 02:04:08.997380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.526 [2024-04-15 02:04:08.997410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-15 02:04:08.997433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.526 [2024-04-15 02:04:09.009029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.526 [2024-04-15 02:04:09.009067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:22959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-15 02:04:09.009085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.526 [2024-04-15 02:04:09.020461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.526 [2024-04-15 02:04:09.020491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:2454 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-15 02:04:09.020507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.526 [2024-04-15 02:04:09.032243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.526 [2024-04-15 02:04:09.032273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-15 02:04:09.032291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.526 [2024-04-15 02:04:09.044958] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.526 [2024-04-15 02:04:09.044988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-15 02:04:09.045005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.526 [2024-04-15 02:04:09.056478] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.526 [2024-04-15 02:04:09.056509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:378 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-15 02:04:09.056539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.526 [2024-04-15 02:04:09.068064] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.526 [2024-04-15 02:04:09.068110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-15 02:04:09.068127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.526 [2024-04-15 02:04:09.079756] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.526 [2024-04-15 02:04:09.079787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-15 02:04:09.079804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.526 [2024-04-15 02:04:09.092177] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.526 [2024-04-15 02:04:09.092209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-15 02:04:09.092227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.526 [2024-04-15 02:04:09.103753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.526 [2024-04-15 02:04:09.103788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-15 02:04:09.103805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.526 [2024-04-15 02:04:09.115151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.526 [2024-04-15 02:04:09.115180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-15 02:04:09.115197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.526 [2024-04-15 02:04:09.127054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.526 [2024-04-15 02:04:09.127083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16398 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-15 02:04:09.127100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 
m:0 dnr:0 00:29:23.526 [2024-04-15 02:04:09.139210] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.526 [2024-04-15 02:04:09.139240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:14028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-15 02:04:09.139257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.526 [2024-04-15 02:04:09.150835] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.526 [2024-04-15 02:04:09.150865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-15 02:04:09.150881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.526 [2024-04-15 02:04:09.162729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.526 [2024-04-15 02:04:09.162759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.526 [2024-04-15 02:04:09.162776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.787 [2024-04-15 02:04:09.175004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.787 [2024-04-15 02:04:09.175036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.787 [2024-04-15 02:04:09.175065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.787 [2024-04-15 02:04:09.186867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.787 [2024-04-15 02:04:09.186898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:18735 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.787 [2024-04-15 02:04:09.186915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.787 [2024-04-15 02:04:09.198640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.787 [2024-04-15 02:04:09.198684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25190 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.787 [2024-04-15 02:04:09.198701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.787 [2024-04-15 02:04:09.210137] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.787 [2024-04-15 02:04:09.210168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.787 [2024-04-15 02:04:09.210184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.787 [2024-04-15 02:04:09.222893] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.787 [2024-04-15 02:04:09.222923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.788 [2024-04-15 02:04:09.222940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.788 [2024-04-15 02:04:09.234583] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.788 [2024-04-15 02:04:09.234613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:7313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.788 [2024-04-15 02:04:09.234629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.788 [2024-04-15 02:04:09.246366] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.788 [2024-04-15 02:04:09.246397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:3428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.788 [2024-04-15 02:04:09.246428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.788 [2024-04-15 02:04:09.258468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.788 [2024-04-15 02:04:09.258500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.788 [2024-04-15 02:04:09.258518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.788 [2024-04-15 02:04:09.270077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.788 [2024-04-15 02:04:09.270109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.788 [2024-04-15 02:04:09.270127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.788 [2024-04-15 02:04:09.281664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.788 [2024-04-15 02:04:09.281693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.788 [2024-04-15 02:04:09.281710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.788 [2024-04-15 02:04:09.293273] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.788 [2024-04-15 02:04:09.293303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.788 [2024-04-15 02:04:09.293320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.788 [2024-04-15 02:04:09.305804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.788 [2024-04-15 02:04:09.305836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20573 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.788 [2024-04-15 02:04:09.305859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.788 [2024-04-15 02:04:09.317432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.788 [2024-04-15 02:04:09.317462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.788 [2024-04-15 02:04:09.317478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.788 [2024-04-15 02:04:09.329074] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.788 [2024-04-15 02:04:09.329114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.788 [2024-04-15 02:04:09.329130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.788 [2024-04-15 02:04:09.341391] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.788 [2024-04-15 02:04:09.341422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.788 [2024-04-15 02:04:09.341439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.788 [2024-04-15 02:04:09.353122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.788 [2024-04-15 02:04:09.353152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.788 [2024-04-15 02:04:09.353168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.788 [2024-04-15 02:04:09.364624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.788 [2024-04-15 02:04:09.364653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.788 [2024-04-15 02:04:09.364669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.788 [2024-04-15 02:04:09.376496] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.788 [2024-04-15 02:04:09.376527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:16730 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:23.788 [2024-04-15 02:04:09.376543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.788 [2024-04-15 02:04:09.388776] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.788 [2024-04-15 02:04:09.388807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:24575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.788 [2024-04-15 02:04:09.388823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.788 [2024-04-15 02:04:09.400395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.788 [2024-04-15 02:04:09.400425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.788 [2024-04-15 02:04:09.400440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.788 [2024-04-15 02:04:09.411994] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.788 [2024-04-15 02:04:09.412025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.788 [2024-04-15 02:04:09.412042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:23.788 [2024-04-15 02:04:09.424215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:23.788 [2024-04-15 02:04:09.424245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:23.788 [2024-04-15 02:04:09.424262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.049 [2024-04-15 02:04:09.436006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:24.050 [2024-04-15 02:04:09.436037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.050 [2024-04-15 02:04:09.436063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.050 [2024-04-15 02:04:09.447661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:24.050 [2024-04-15 02:04:09.447707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:24.050 [2024-04-15 02:04:09.447724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:24.050 [2024-04-15 02:04:09.459426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10) 00:29:24.050 [2024-04-15 02:04:09.459457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 
lba:10335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.050 [2024-04-15 02:04:09.459473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:24.050 [2024-04-15 02:04:09.471612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1c91f10)
00:29:24.050 [2024-04-15 02:04:09.471644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:11572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:24.050 [2024-04-15 02:04:09.471660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... 22 further data digest error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets of the same form on tqpair=(0x1c91f10), qid:1, len:1, 02:04:09.483 through 02:04:09.731 ...]
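Each triplet above is one injected failure surfacing: with data digest enabled on the connection (--ddgst) and the accel CRC32C result corrupted by the injected error, the host rejects the received payload and completes the READ as COMMAND TRANSIENT TRANSPORT ERROR (00/22), a retryable status; with --bdev-retry-count -1 the bdev layer evidently keeps retrying, so the run finishes while the error counter climbs. A minimal sketch for tallying such completions from a captured excerpt like this one (bperf.log is a hypothetical capture file, not part of the test):

    # Count transient-transport-error completions in a saved log excerpt
    # (illustrative only; not a command from host/digest.sh).
    grep -c 'COMMAND TRANSIENT TRANSPORT ERROR (00/22)' bperf.log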
00:29:24.310 Latency(us)
00:29:24.310 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:24.310 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:24.310 nvme0n1 : 2.00 21407.53 83.62 0.00 0.00 5971.28 3131.16 18835.53
00:29:24.310 ===================================================================================================================
00:29:24.310 Total : 21407.53 83.62 0.00 0.00 5971.28 3131.16 18835.53
00:29:24.310 0
02:04:09 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
02:04:09 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
02:04:09 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
| .driver_specific
| .nvme_error
| .status_code
| .command_transient_transport_error'
02:04:09 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
02:04:09 -- host/digest.sh@71 -- # (( 168 > 0 ))
02:04:09 -- host/digest.sh@73 -- # killprocess 2279780
02:04:09 -- common/autotest_common.sh@926 -- # '[' -z 2279780 ']'
02:04:09 -- common/autotest_common.sh@930 -- # kill -0 2279780
02:04:09 -- common/autotest_common.sh@931 -- # uname
02:04:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
02:04:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2279780
02:04:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1
02:04:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
02:04:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2279780'
killing process with pid 2279780
02:04:10 -- common/autotest_common.sh@945 -- # kill 2279780
Received shutdown signal, test time was about 2.000000 seconds
00:29:24.568 Latency(us)
00:29:24.568 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:24.568 ===================================================================================================================
00:29:24.568 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
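The get_transient_errcount trace above reads the per-bdev NVMe error counters that --nvme-error-stat enables; here it returned 168, satisfying (( 168 > 0 )). A minimal sketch of the same lookup, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock:

    # Query bdev iostat and extract the transient-transport-error counter
    # (same RPC and jq path as the trace above, collapsed into one jq filter).
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

As a sanity check on the table, 21407.53 IOPS x 4096 B = 83.62 MiB/s, matching the MiB/s column.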
02:04:10 -- common/autotest_common.sh@950 -- # wait 2279780
02:04:10 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
02:04:10 -- host/digest.sh@54 -- # local rw bs qd
02:04:10 -- host/digest.sh@56 -- # rw=randread
02:04:10 -- host/digest.sh@56 -- # bs=131072
02:04:10 -- host/digest.sh@56 -- # qd=16
02:04:10 -- host/digest.sh@58 -- # bperfpid=2280336
02:04:10 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
02:04:10 -- host/digest.sh@60 -- # waitforlisten 2280336 /var/tmp/bperf.sock
02:04:10 -- common/autotest_common.sh@819 -- # '[' -z 2280336 ']'
02:04:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
02:04:10 -- common/autotest_common.sh@824 -- # local max_retries=100
02:04:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
02:04:10 -- common/autotest_common.sh@828 -- # xtrace_disable
02:04:10 -- common/autotest_common.sh@10 -- # set +x
00:29:24.827 [2024-04-15 02:04:10.267987] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
00:29:24.827 [2024-04-15 02:04:10.268088] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2280336 ]
00:29:24.827 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:24.827 Zero copy mechanism will not be used.
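For reference, the bdevperf invocation above, annotated (a sketch based on bdevperf's documented options, not output from this run):

    # build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
    #   -m 2                    reactor core mask 0x2 (run on core 1)
    #   -r /var/tmp/bperf.sock  RPC listen socket that bperf_rpc/bperf_py talk to
    #   -w randread             workload type
    #   -o 131072               I/O size in bytes (128 KiB)
    #   -t 2                    run time in seconds
    #   -q 16                   queue depth
    #   -z                      start idle and wait for the perform_tests RPC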
00:29:24.827 EAL: No free 2048 kB hugepages reported on node 1
00:29:24.827 [2024-04-15 02:04:10.328618] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:24.827 [2024-04-15 02:04:10.412704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
02:04:11 -- common/autotest_common.sh@848 -- # (( i == 0 ))
02:04:11 -- common/autotest_common.sh@852 -- # return 0
02:04:11 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
02:04:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
02:04:11 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
02:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable
02:04:11 -- common/autotest_common.sh@10 -- # set +x
02:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
02:04:11 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
02:04:11 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
nvme0n1
02:04:11 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
02:04:11 -- common/autotest_common.sh@551 -- # xtrace_disable
02:04:11 -- common/autotest_common.sh@10 -- # set +x
02:04:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
02:04:11 -- host/digest.sh@69 -- # bperf_py perform_tests
02:04:11 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
I/O size of 131072 is greater than zero copy threshold (65536).
Zero copy mechanism will not be used.
Running I/O for 2 seconds...
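Condensed, the RPC sequence just driven over /var/tmp/bperf.sock looks like the sketch below (same calls as the trace above, run from the spdk checkout; the RPC shell variable is illustrative):

    RPC='scripts/rpc.py -s /var/tmp/bperf.sock'
    $RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # count NVMe errors, retry failed I/O indefinitely
    $RPC accel_error_inject_error -o crc32c -t disable                   # keep digests clean while connecting
    $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0                           # attach with data digest enabled
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32             # corrupt CRC32C results during the run (arguments as traced above)
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Note the READ commands below carry len:32: each 131072-byte I/O spans 131072 / 4096 = 32 blocks of the 4096-byte-block namespace, versus len:1 for the 4096-byte run above.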
00:29:26.535 [2024-04-15 02:04:11.984354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe4de0)
00:29:26.535 [2024-04-15 02:04:11.984428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:26.535 [2024-04-15 02:04:11.984447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... ~100 further data digest error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplets of the same form on tqpair=(0xbe4de0), qid:1 cid:15, len:32, sqhd cycling 0021/0041/0061/0001, 02:04:12.001 through 02:04:13.737 ...]
00:29:28.359 [2024-04-15 02:04:13.754750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe4de0)
00:29:28.359 [2024-04-15 02:04:13.754785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.359 [2024-04-15 02:04:13.754805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.359 [2024-04-15 02:04:13.772327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe4de0) 00:29:28.359 [2024-04-15 02:04:13.772372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.359 [2024-04-15 02:04:13.772393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.359 [2024-04-15 02:04:13.789910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe4de0) 00:29:28.359 [2024-04-15 02:04:13.789944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.359 [2024-04-15 02:04:13.789963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.359 [2024-04-15 02:04:13.807591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe4de0) 00:29:28.359 [2024-04-15 02:04:13.807624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.359 [2024-04-15 02:04:13.807643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:28.359 [2024-04-15 02:04:13.825554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe4de0) 00:29:28.359 [2024-04-15 02:04:13.825587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.359 [2024-04-15 02:04:13.825606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:28.359 [2024-04-15 02:04:13.843312] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe4de0) 00:29:28.359 [2024-04-15 02:04:13.843356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.359 [2024-04-15 02:04:13.843373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:28.359 [2024-04-15 02:04:13.861017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe4de0) 00:29:28.359 [2024-04-15 02:04:13.861060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:28.359 [2024-04-15 02:04:13.861081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:28.359 [2024-04-15 02:04:13.878716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xbe4de0)
00:29:28.359 [2024-04-15 02:04:13.878750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:28.359 [2024-04-15 02:04:13.878769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:28.359 [2024-04-15 02:04:13.896449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe4de0)
00:29:28.359 [2024-04-15 02:04:13.896482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:28.359 [2024-04-15 02:04:13.896501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:28.359 [2024-04-15 02:04:13.914321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe4de0)
00:29:28.359 [2024-04-15 02:04:13.914351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:28.359 [2024-04-15 02:04:13.914369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:28.359 [2024-04-15 02:04:13.932539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe4de0)
00:29:28.359 [2024-04-15 02:04:13.932571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:28.359 [2024-04-15 02:04:13.932590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:28.359 [2024-04-15 02:04:13.950478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe4de0)
00:29:28.359 [2024-04-15 02:04:13.950512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:28.359 [2024-04-15 02:04:13.950537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:28.359 [2024-04-15 02:04:13.968767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xbe4de0)
00:29:28.359 [2024-04-15 02:04:13.968800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:28.359 [2024-04-15 02:04:13.968820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:28.359
00:29:28.359 Latency(us)
00:29:28.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:28.359 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:28.359 nvme0n1 : 2.01 1797.96 224.74 0.00 0.00 8895.25 8058.50 18447.17
00:29:28.359 ===================================================================================================================
00:29:28.359 Total : 1797.96 224.74 0.00 0.00 8895.25 8058.50 18447.17
00:29:28.359 0
00:29:28.359 02:04:13 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:28.359 02:04:13 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:28.359 02:04:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:28.359 02:04:13 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:28.359 | .driver_specific
00:29:28.359 | .nvme_error
00:29:28.359 | .status_code
00:29:28.359 | .command_transient_transport_error'
00:29:28.618 02:04:14 -- host/digest.sh@71 -- # (( 116 > 0 ))
00:29:28.618 02:04:14 -- host/digest.sh@73 -- # killprocess 2280336
00:29:28.618 02:04:14 -- common/autotest_common.sh@926 -- # '[' -z 2280336 ']'
00:29:28.618 02:04:14 -- common/autotest_common.sh@930 -- # kill -0 2280336
00:29:28.618 02:04:14 -- common/autotest_common.sh@931 -- # uname
00:29:28.618 02:04:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:28.618 02:04:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2280336
00:29:28.877 02:04:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:28.877 02:04:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:28.877 02:04:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2280336'
00:29:28.877 killing process with pid 2280336
00:29:28.877 02:04:14 -- common/autotest_common.sh@945 -- # kill 2280336
00:29:28.877 Received shutdown signal, test time was about 2.000000 seconds
00:29:28.877
00:29:28.877 Latency(us)
00:29:28.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:28.877 ===================================================================================================================
00:29:28.877 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:28.877 02:04:14 -- common/autotest_common.sh@950 -- # wait 2280336
00:29:28.877 02:04:14 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:29:28.877 02:04:14 -- host/digest.sh@54 -- # local rw bs qd
00:29:28.877 02:04:14 -- host/digest.sh@56 -- # rw=randwrite
00:29:28.877 02:04:14 -- host/digest.sh@56 -- # bs=4096
00:29:28.877 02:04:14 -- host/digest.sh@56 -- # qd=128
00:29:28.877 02:04:14 -- host/digest.sh@58 -- # bperfpid=2280889
00:29:28.877 02:04:14 -- host/digest.sh@60 -- # waitforlisten 2280889 /var/tmp/bperf.sock
00:29:28.877 02:04:14 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:28.877 02:04:14 -- common/autotest_common.sh@819 -- # '[' -z 2280889 ']'
00:29:28.877 02:04:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:28.877 02:04:14 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:28.877 02:04:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:28.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:28.877 02:04:14 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:28.877 02:04:14 -- common/autotest_common.sh@10 -- # set +x
00:29:28.877 [2024-04-15 02:04:14.517974] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
00:29:28.877 [2024-04-15 02:04:14.518079] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2280889 ]
00:29:29.137 EAL: No free 2048 kB hugepages reported on node 1
00:29:29.137 [2024-04-15 02:04:14.579384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:29.137 [2024-04-15 02:04:14.665716] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:30.074 02:04:15 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:30.074 02:04:15 -- common/autotest_common.sh@852 -- # return 0
00:29:30.074 02:04:15 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:30.074 02:04:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:30.333 02:04:15 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:30.333 02:04:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:30.333 02:04:15 -- common/autotest_common.sh@10 -- # set +x
00:29:30.333 02:04:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:30.333 02:04:15 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:30.333 02:04:15 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:30.591 nvme0n1
00:29:30.591 02:04:16 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:30.591 02:04:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:30.591 02:04:16 -- common/autotest_common.sh@10 -- # set +x
00:29:30.591 02:04:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:30.591 02:04:16 -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:30.591 02:04:16 -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:30.591 Running I/O for 2 seconds...
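The trace above is the setup half of the randwrite digest test: bdevperf is started in wait mode, NVMe error statistics and unlimited retries are enabled over its RPC socket, the controller is attached with TCP data digest (--ddgst) on, and the accel layer is told to corrupt every 256th CRC32C result before perform_tests kicks off the queued I/O. Condensed out of the xtrace noise, the same sequence looks roughly like the sketch below; every path, flag, address, and NQN is copied from the trace, while the scaffolding around them (the SPDK/BPERF variables, the final jq readout mirroring get_transient_errcount, and the assumption that the error-injection RPC goes to the target's default socket) is illustrative, not part of digest.sh itself.

#!/usr/bin/env bash
# Condensed sketch of the randwrite digest-error run traced above. Paths,
# flags, the target address, and the NQN are taken from the trace; variable
# names and the default-socket destination of accel_error_inject_error are
# assumptions for illustration.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
BPERF=/var/tmp/bperf.sock

# Start bdevperf idle (-z) so it can be configured over its RPC socket:
# core mask 0x2, 4096-byte random writes, queue depth 128, 2-second run.
$SPDK/build/examples/bdevperf -m 2 -r $BPERF -w randwrite -o 4096 -t 2 -q 128 -z &

# Keep per-type NVMe error counters and retry transient errors indefinitely.
$SPDK/scripts/rpc.py -s $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Attach the NVMe-oF/TCP controller with data digest (--ddgst) enabled.
$SPDK/scripts/rpc.py -s $BPERF bdev_nvme_attach_controller --ddgst -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 256th CRC32C computation so data digests mismatch on the
# wire and completions fail with TRANSIENT TRANSPORT ERROR (00/22).
$SPDK/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

# Run the queued I/O, then read the transient-error count back, as
# get_transient_errcount did for the randread pass above.
$SPDK/examples/bdev/bdevperf/bdevperf.py -s $BPERF perform_tests
$SPDK/scripts/rpc.py -s $BPERF bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'

A pass is declared when that count is positive, exactly as the (( 116 > 0 )) check did after the randread run; the write-side digest failures that follow are the expected effect of the injection.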
00:29:30.591 [2024-04-15 02:04:16.186471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190ed920 00:29:30.591 [2024-04-15 02:04:16.187417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10351 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.591 [2024-04-15 02:04:16.187456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:30.591 [2024-04-15 02:04:16.199346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190fb048 00:29:30.591 [2024-04-15 02:04:16.200125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.591 [2024-04-15 02:04:16.200153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:30.591 [2024-04-15 02:04:16.212162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f3a28 00:29:30.591 [2024-04-15 02:04:16.213369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.591 [2024-04-15 02:04:16.213400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:30.591 [2024-04-15 02:04:16.224629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f3a28 00:29:30.591 [2024-04-15 02:04:16.225867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:13811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.591 [2024-04-15 02:04:16.225910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:30.591 [2024-04-15 02:04:16.237205] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f3a28 00:29:30.591 [2024-04-15 02:04:16.238447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:15026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.591 [2024-04-15 02:04:16.238479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:30.852 [2024-04-15 02:04:16.249764] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f3a28 00:29:30.852 [2024-04-15 02:04:16.250987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.852 [2024-04-15 02:04:16.251020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:30.852 [2024-04-15 02:04:16.262068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f3a28 00:29:30.852 [2024-04-15 02:04:16.263339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:6487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.852 [2024-04-15 02:04:16.263368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 
sqhd:0063 p:0 m:0 dnr:0 00:29:30.852 [2024-04-15 02:04:16.274542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f3a28 00:29:30.852 [2024-04-15 02:04:16.275831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:17846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.852 [2024-04-15 02:04:16.275864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:30.852 [2024-04-15 02:04:16.286995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f3a28 00:29:30.852 [2024-04-15 02:04:16.288305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:18133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.852 [2024-04-15 02:04:16.288348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:30.852 [2024-04-15 02:04:16.299389] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f3a28 00:29:30.852 [2024-04-15 02:04:16.300709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.852 [2024-04-15 02:04:16.300742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:30.852 [2024-04-15 02:04:16.311836] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f3e60 00:29:30.852 [2024-04-15 02:04:16.313181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:3014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.852 [2024-04-15 02:04:16.313211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:30.852 [2024-04-15 02:04:16.324224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f3e60 00:29:30.852 [2024-04-15 02:04:16.325516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.852 [2024-04-15 02:04:16.325561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:30.852 [2024-04-15 02:04:16.336605] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f3e60 00:29:30.852 [2024-04-15 02:04:16.337925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:7529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.852 [2024-04-15 02:04:16.337957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:30.852 [2024-04-15 02:04:16.349167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f46d0 00:29:30.852 [2024-04-15 02:04:16.350479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:20151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.852 [2024-04-15 02:04:16.350511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:87 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:30.852 [2024-04-15 02:04:16.361594] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f46d0 00:29:30.852 [2024-04-15 02:04:16.362916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.852 [2024-04-15 02:04:16.362948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:30.852 [2024-04-15 02:04:16.373978] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f46d0 00:29:30.852 [2024-04-15 02:04:16.375301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.852 [2024-04-15 02:04:16.375344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:30.852 [2024-04-15 02:04:16.386281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f46d0 00:29:30.852 [2024-04-15 02:04:16.387689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.852 [2024-04-15 02:04:16.387723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:30.852 [2024-04-15 02:04:16.398621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f46d0 00:29:30.852 [2024-04-15 02:04:16.400019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.852 [2024-04-15 02:04:16.400070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:29:30.853 [2024-04-15 02:04:16.411024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f46d0 00:29:30.853 [2024-04-15 02:04:16.412491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:12827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.853 [2024-04-15 02:04:16.412524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:30.853 [2024-04-15 02:04:16.423488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f46d0 00:29:30.853 [2024-04-15 02:04:16.424899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.853 [2024-04-15 02:04:16.424930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:30.853 [2024-04-15 02:04:16.435900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f46d0 00:29:30.853 [2024-04-15 02:04:16.437336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.853 [2024-04-15 02:04:16.437372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:30.853 [2024-04-15 02:04:16.448157] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f46d0 00:29:30.853 [2024-04-15 02:04:16.449581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.853 [2024-04-15 02:04:16.449613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:30.853 [2024-04-15 02:04:16.460569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f46d0 00:29:30.853 [2024-04-15 02:04:16.462015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:2148 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.853 [2024-04-15 02:04:16.462054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:29:30.853 [2024-04-15 02:04:16.472902] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f4f40 00:29:30.853 [2024-04-15 02:04:16.474304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5222 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.853 [2024-04-15 02:04:16.474332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:30.853 [2024-04-15 02:04:16.485288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f35f0 00:29:30.853 [2024-04-15 02:04:16.486789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.853 [2024-04-15 02:04:16.486821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:30.853 [2024-04-15 02:04:16.497804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190e88f8 00:29:31.112 [2024-04-15 02:04:16.499313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:812 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.112 [2024-04-15 02:04:16.499357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:31.112 [2024-04-15 02:04:16.510287] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f0ff8 00:29:31.112 [2024-04-15 02:04:16.511816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:1343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.112 [2024-04-15 02:04:16.511849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:31.112 [2024-04-15 02:04:16.522678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190e99d8 00:29:31.112 [2024-04-15 02:04:16.524177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15503 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.524205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.535038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190e9e10 00:29:31.113 [2024-04-15 02:04:16.536559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.536592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.547514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190e9e10 00:29:31.113 [2024-04-15 02:04:16.549056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.549106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.559784] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190e99d8 00:29:31.113 [2024-04-15 02:04:16.561361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:15155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.561390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.572038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f0ff8 00:29:31.113 [2024-04-15 02:04:16.573614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.573647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.584410] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190e88f8 00:29:31.113 [2024-04-15 02:04:16.586005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.586038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.596698] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f35f0 00:29:31.113 [2024-04-15 02:04:16.598258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.598299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.608923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f4f40 00:29:31.113 [2024-04-15 02:04:16.610242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:15804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.610269] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.621355] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f46d0 00:29:31.113 [2024-04-15 02:04:16.623274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.623303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.633715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f7100 00:29:31.113 [2024-04-15 02:04:16.635696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:9266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.635728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.646008] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f5378 00:29:31.113 [2024-04-15 02:04:16.648032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:1464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.648073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.658262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f1868 00:29:31.113 [2024-04-15 02:04:16.659991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.660024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.670873] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f6890 00:29:31.113 [2024-04-15 02:04:16.672613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:3431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.672645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.681995] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190e88f8 00:29:31.113 [2024-04-15 02:04:16.683018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.683058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.694245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f92c0 00:29:31.113 [2024-04-15 02:04:16.695314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:12512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 
02:04:16.695344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.706371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190e7818 00:29:31.113 [2024-04-15 02:04:16.707511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.707555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.718679] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f4f40 00:29:31.113 [2024-04-15 02:04:16.719809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.719842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.730807] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f5be8 00:29:31.113 [2024-04-15 02:04:16.731952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:21927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.731986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.743563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f7538 00:29:31.113 [2024-04-15 02:04:16.744600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.744633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:31.113 [2024-04-15 02:04:16.755695] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f4f40 00:29:31.113 [2024-04-15 02:04:16.756679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17311 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.113 [2024-04-15 02:04:16.756711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:31.374 [2024-04-15 02:04:16.767776] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190fb480 00:29:31.374 [2024-04-15 02:04:16.768906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:22835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.374 [2024-04-15 02:04:16.768939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:31.374 [2024-04-15 02:04:16.781999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190fcdd0 00:29:31.374 [2024-04-15 02:04:16.783875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:31.374 [2024-04-15 02:04:16.783908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.374 [2024-04-15 02:04:16.794382] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f35f0 00:29:31.374 [2024-04-15 02:04:16.796332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.374 [2024-04-15 02:04:16.796359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:31.374 [2024-04-15 02:04:16.807007] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190e8088 00:29:31.374 [2024-04-15 02:04:16.808802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.374 [2024-04-15 02:04:16.808836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:31.374 [2024-04-15 02:04:16.819377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190feb58 00:29:31.374 [2024-04-15 02:04:16.821209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:23845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.374 [2024-04-15 02:04:16.821236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.374 [2024-04-15 02:04:16.831646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190ea248 00:29:31.374 [2024-04-15 02:04:16.833487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.374 [2024-04-15 02:04:16.833515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:31.374 [2024-04-15 02:04:16.843488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f9f68 00:29:31.374 [2024-04-15 02:04:16.845404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.374 [2024-04-15 02:04:16.845431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:31.374 [2024-04-15 02:04:16.855365] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f92c0 00:29:31.374 [2024-04-15 02:04:16.857233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.374 [2024-04-15 02:04:16.857261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:31.374 [2024-04-15 02:04:16.867294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190fb048 00:29:31.374 [2024-04-15 02:04:16.869170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17431 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:31.374 [2024-04-15 02:04:16.869204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:31.374 [2024-04-15 02:04:16.877868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190f7970 00:29:31.374 [2024-04-15 02:04:16.879451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.374 [2024-04-15 02:04:16.879496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:31.374 [2024-04-15 02:04:16.889768] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190ef6a8 00:29:31.374 [2024-04-15 02:04:16.891662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.374 [2024-04-15 02:04:16.891693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:31.374 [2024-04-15 02:04:16.902236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190e9168 00:29:31.374 [2024-04-15 02:04:16.902561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:21965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.374 [2024-04-15 02:04:16.902593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:31.374 [2024-04-15 02:04:16.915248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190e9168 00:29:31.374 [2024-04-15 02:04:16.915581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13133 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.374 [2024-04-15 02:04:16.915612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:31.374 [2024-04-15 02:04:16.928091] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190e9168 00:29:31.374 [2024-04-15 02:04:16.928418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.374 [2024-04-15 02:04:16.928450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:31.374 [2024-04-15 02:04:16.941334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190e9168 00:29:31.374 [2024-04-15 02:04:16.941688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:16361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:31.374 [2024-04-15 02:04:16.941720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:31.374 [2024-04-15 02:04:16.954850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190e9168 00:29:31.374 [2024-04-15 02:04:16.955195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:22636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:31.374 [2024-04-15 02:04:16.955242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:29:31.374 [2024-04-15 02:04:16.968363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190e9168
00:29:31.374 [2024-04-15 02:04:16.968722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:31.374 [2024-04-15 02:04:16.968757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0
[... the same three-record sequence -- data digest error on tqpair 0x19c7830, WRITE command print, COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion -- repeats roughly every 13 ms for the rest of the run, the cid cycling through 41, 112, 26, 122, 110, 14, 54, 103 and 86, with only the LBA changing ...]
00:29:32.674 [2024-04-15 02:04:18.168719] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c7830) with pdu=0x2000190e9168
00:29:32.674 [2024-04-15 02:04:18.169018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:32.674 [2024-04-15 02:04:18.169056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0
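Every digest failure above is tallied by the host NVMe driver under driver_specific.nvme_error in the bdev_get_iostat output, which is how the harness verifies the run: the trace below reads that counter back over bdevperf's RPC socket and asserts it is non-zero. As a stand-alone sketch of the same query (socket, script path, bdev name and jq filter exactly as traced below):

    # Read back the transient-transport-error count for nvme0n1 over the bperf RPC socket
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'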
02:04:18.169056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:32.674 00:29:32.674 Latency(us) 00:29:32.674 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.674 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:32.674 nvme0n1 : 2.01 19498.08 76.16 0.00 0.00 6550.75 3070.48 19709.35 00:29:32.674 =================================================================================================================== 00:29:32.674 Total : 19498.08 76.16 0.00 0.00 6550.75 3070.48 19709.35 00:29:32.674 0 00:29:32.674 02:04:18 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:32.674 02:04:18 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:32.674 02:04:18 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:32.674 02:04:18 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:32.675 | .driver_specific 00:29:32.675 | .nvme_error 00:29:32.675 | .status_code 00:29:32.675 | .command_transient_transport_error' 00:29:32.934 02:04:18 -- host/digest.sh@71 -- # (( 153 > 0 )) 00:29:32.934 02:04:18 -- host/digest.sh@73 -- # killprocess 2280889 00:29:32.934 02:04:18 -- common/autotest_common.sh@926 -- # '[' -z 2280889 ']' 00:29:32.934 02:04:18 -- common/autotest_common.sh@930 -- # kill -0 2280889 00:29:32.934 02:04:18 -- common/autotest_common.sh@931 -- # uname 00:29:32.934 02:04:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:29:32.934 02:04:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2280889 00:29:32.934 02:04:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:29:32.934 02:04:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:29:32.934 02:04:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2280889' 00:29:32.934 killing process with pid 2280889 00:29:32.934 02:04:18 -- common/autotest_common.sh@945 -- # kill 2280889 00:29:32.934 Received shutdown signal, test time was about 2.000000 seconds 00:29:32.934 00:29:32.934 Latency(us) 00:29:32.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:32.934 =================================================================================================================== 00:29:32.934 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:32.934 02:04:18 -- common/autotest_common.sh@950 -- # wait 2280889 00:29:33.192 02:04:18 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:29:33.192 02:04:18 -- host/digest.sh@54 -- # local rw bs qd 00:29:33.192 02:04:18 -- host/digest.sh@56 -- # rw=randwrite 00:29:33.192 02:04:18 -- host/digest.sh@56 -- # bs=131072 00:29:33.192 02:04:18 -- host/digest.sh@56 -- # qd=16 00:29:33.192 02:04:18 -- host/digest.sh@58 -- # bperfpid=2281422 00:29:33.192 02:04:18 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:33.192 02:04:18 -- host/digest.sh@60 -- # waitforlisten 2281422 /var/tmp/bperf.sock 00:29:33.192 02:04:18 -- common/autotest_common.sh@819 -- # '[' -z 2281422 ']' 00:29:33.192 02:04:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:33.192 02:04:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:33.192 02:04:18 -- common/autotest_common.sh@826 -- # echo 'Waiting 
00:29:32.934 02:04:18 -- host/digest.sh@73 -- # killprocess 2280889
00:29:32.934 02:04:18 -- common/autotest_common.sh@926 -- # '[' -z 2280889 ']'
00:29:32.934 02:04:18 -- common/autotest_common.sh@930 -- # kill -0 2280889
00:29:32.934 02:04:18 -- common/autotest_common.sh@931 -- # uname
00:29:32.934 02:04:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:32.934 02:04:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2280889
00:29:32.934 02:04:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:32.934 02:04:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:32.934 02:04:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2280889'
killing process with pid 2280889
00:29:32.934 02:04:18 -- common/autotest_common.sh@945 -- # kill 2280889
00:29:32.934 Received shutdown signal, test time was about 2.000000 seconds
00:29:32.934
00:29:32.934 Latency(us)
00:29:32.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:32.934 ===================================================================================================================
00:29:32.934 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:32.934 02:04:18 -- common/autotest_common.sh@950 -- # wait 2280889
00:29:33.192 02:04:18 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:29:33.192 02:04:18 -- host/digest.sh@54 -- # local rw bs qd
00:29:33.192 02:04:18 -- host/digest.sh@56 -- # rw=randwrite
00:29:33.192 02:04:18 -- host/digest.sh@56 -- # bs=131072
00:29:33.192 02:04:18 -- host/digest.sh@56 -- # qd=16
00:29:33.192 02:04:18 -- host/digest.sh@58 -- # bperfpid=2281422
00:29:33.192 02:04:18 -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:33.192 02:04:18 -- host/digest.sh@60 -- # waitforlisten 2281422 /var/tmp/bperf.sock
00:29:33.192 02:04:18 -- common/autotest_common.sh@819 -- # '[' -z 2281422 ']'
00:29:33.192 02:04:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:33.192 02:04:18 -- common/autotest_common.sh@824 -- # local max_retries=100
00:29:33.192 02:04:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:33.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:33.192 02:04:18 -- common/autotest_common.sh@828 -- # xtrace_disable
00:29:33.192 02:04:18 -- common/autotest_common.sh@10 -- # set +x
00:29:33.192 [2024-04-15 02:04:18.711634] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
00:29:33.192 [2024-04-15 02:04:18.711730] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2281422 ]
00:29:33.192 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:33.192 Zero copy mechanism will not be used.
00:29:33.192 EAL: No free 2048 kB hugepages reported on node 1
00:29:33.452 [2024-04-15 02:04:18.774941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:33.452 [2024-04-15 02:04:18.862640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:34.019 02:04:19 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:34.019 02:04:19 -- common/autotest_common.sh@852 -- # return 0
00:29:34.019 02:04:19 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:34.019 02:04:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:34.284 02:04:19 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:34.284 02:04:19 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:34.285 02:04:19 -- common/autotest_common.sh@10 -- # set +x
00:29:34.285 02:04:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:34.285 02:04:19 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:34.285 02:04:19 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
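The harness has now torn down the first bdevperf, relaunched it for 128 KiB random writes at queue depth 16, and is re-attaching the controller with data digest enabled (--ddgst), so every NVMe/TCP data PDU carries a CRC32C that the host verifies. Condensed into plain commands, this second pass amounts to the sketch below (paths, socket and NQN as traced; note that rpc_cmd, unlike bperf_rpc, talks to the target application's RPC socket, whose address is an assumption since it never appears in this log):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # bdevperf side: keep per-controller NVMe error stats, retry indefinitely,
    # attach over TCP with data digest enabled
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # target side: corrupt the CRC32C result of every 32nd accel operation,
    # so the host sees periodic data digest errors (traced just below)
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32

    # run the configured 2-second workload
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests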
00:29:34.880 [2024-04-15 02:04:20.386738] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:34.880 [2024-04-15 02:04:20.387336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.880 [2024-04-15 02:04:20.387390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.880 [2024-04-15 02:04:20.419831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:34.881 [2024-04-15 02:04:20.420686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.881 [2024-04-15 02:04:20.420722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.881 [2024-04-15 02:04:20.452692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:34.881 [2024-04-15 02:04:20.453513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.881 [2024-04-15 02:04:20.453554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.881 [2024-04-15 02:04:20.483338] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:34.881 [2024-04-15 02:04:20.484316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.881 [2024-04-15 02:04:20.484348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.881 [2024-04-15 02:04:20.517304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:34.881 [2024-04-15 02:04:20.518122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.881 [2024-04-15 02:04:20.518153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.141 [2024-04-15 02:04:20.549268] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.141 [2024-04-15 02:04:20.549944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.141 [2024-04-15 02:04:20.549974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.141 [2024-04-15 02:04:20.580641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.141 [2024-04-15 02:04:20.581729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.141 [2024-04-15 02:04:20.581760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.141 [2024-04-15 02:04:20.614630] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.141 [2024-04-15 02:04:20.615537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.141 [2024-04-15 02:04:20.615567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.141 [2024-04-15 02:04:20.647498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.141 [2024-04-15 02:04:20.648334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.141 [2024-04-15 02:04:20.648365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.141 [2024-04-15 02:04:20.679959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.141 [2024-04-15 02:04:20.680576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.141 [2024-04-15 02:04:20.680607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.141 [2024-04-15 02:04:20.711057] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.141 [2024-04-15 02:04:20.711832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.141 [2024-04-15 02:04:20.711862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.141 [2024-04-15 02:04:20.743642] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.141 [2024-04-15 02:04:20.744155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.141 [2024-04-15 02:04:20.744186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.141 [2024-04-15 02:04:20.774251] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.141 [2024-04-15 02:04:20.775210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.141 [2024-04-15 02:04:20.775240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.402 [2024-04-15 02:04:20.806515] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.402 [2024-04-15 02:04:20.807358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.402 [2024-04-15 02:04:20.807389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.402 [2024-04-15 02:04:20.840441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.402 [2024-04-15 02:04:20.841462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.402 [2024-04-15 02:04:20.841492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.402 [2024-04-15 02:04:20.873989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.402 [2024-04-15 02:04:20.874914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.402 [2024-04-15 02:04:20.874959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.402 [2024-04-15 02:04:20.905471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.402 [2024-04-15 02:04:20.906476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.402 [2024-04-15 02:04:20.906507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.402 [2024-04-15 02:04:20.938981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.402 [2024-04-15 02:04:20.939916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.402 [2024-04-15 02:04:20.939947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.402 [2024-04-15 02:04:20.972298] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.402 [2024-04-15 02:04:20.973064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.402 [2024-04-15 02:04:20.973095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.402 [2024-04-15 02:04:21.002077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.402 [2024-04-15 02:04:21.003123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.402 [2024-04-15 02:04:21.003153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.402 [2024-04-15 02:04:21.032933] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.402 [2024-04-15 02:04:21.033935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.402 [2024-04-15 02:04:21.033964] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.661 [2024-04-15 02:04:21.065529] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.661 [2024-04-15 02:04:21.066553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.661 [2024-04-15 02:04:21.066584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.661 [2024-04-15 02:04:21.096579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.661 [2024-04-15 02:04:21.097627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.661 [2024-04-15 02:04:21.097658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.661 [2024-04-15 02:04:21.128223] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.661 [2024-04-15 02:04:21.129341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.661 [2024-04-15 02:04:21.129386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.661 [2024-04-15 02:04:21.162616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.661 [2024-04-15 02:04:21.163591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.661 [2024-04-15 02:04:21.163621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.661 [2024-04-15 02:04:21.195430] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.661 [2024-04-15 02:04:21.196314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.661 [2024-04-15 02:04:21.196359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.661 [2024-04-15 02:04:21.228484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.661 [2024-04-15 02:04:21.229631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.661 [2024-04-15 02:04:21.229660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.661 [2024-04-15 02:04:21.259601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.661 [2024-04-15 02:04:21.260237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:35.661 [2024-04-15 02:04:21.260267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.661 [2024-04-15 02:04:21.287663] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.661 [2024-04-15 02:04:21.288581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.661 [2024-04-15 02:04:21.288614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.920 [2024-04-15 02:04:21.319549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.920 [2024-04-15 02:04:21.320229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.920 [2024-04-15 02:04:21.320259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.920 [2024-04-15 02:04:21.350498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.920 [2024-04-15 02:04:21.351612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.920 [2024-04-15 02:04:21.351641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.920 [2024-04-15 02:04:21.382588] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.920 [2024-04-15 02:04:21.383261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.920 [2024-04-15 02:04:21.383291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.920 [2024-04-15 02:04:21.411018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.920 [2024-04-15 02:04:21.411929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.920 [2024-04-15 02:04:21.411959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.920 [2024-04-15 02:04:21.441773] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.920 [2024-04-15 02:04:21.442411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.920 [2024-04-15 02:04:21.442441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:35.920 [2024-04-15 02:04:21.474305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.920 [2024-04-15 02:04:21.475096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.920 [2024-04-15 02:04:21.475125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:35.920 [2024-04-15 02:04:21.506880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.920 [2024-04-15 02:04:21.507650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.920 [2024-04-15 02:04:21.507680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:35.920 [2024-04-15 02:04:21.534336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.920 [2024-04-15 02:04:21.535294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.920 [2024-04-15 02:04:21.535322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:35.920 [2024-04-15 02:04:21.562101] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:35.920 [2024-04-15 02:04:21.562910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.920 [2024-04-15 02:04:21.562940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.178 [2024-04-15 02:04:21.593179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.178 [2024-04-15 02:04:21.594127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.178 [2024-04-15 02:04:21.594157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.178 [2024-04-15 02:04:21.621542] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.178 [2024-04-15 02:04:21.622817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.178 [2024-04-15 02:04:21.622846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.178 [2024-04-15 02:04:21.650429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.178 [2024-04-15 02:04:21.651229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.178 [2024-04-15 02:04:21.651259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.178 [2024-04-15 02:04:21.681523] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.178 [2024-04-15 02:04:21.682400] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.178 [2024-04-15 02:04:21.682429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.178 [2024-04-15 02:04:21.711848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.178 [2024-04-15 02:04:21.712708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.178 [2024-04-15 02:04:21.712737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.178 [2024-04-15 02:04:21.744474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.178 [2024-04-15 02:04:21.745212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.178 [2024-04-15 02:04:21.745241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.178 [2024-04-15 02:04:21.773467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.178 [2024-04-15 02:04:21.774022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.179 [2024-04-15 02:04:21.774057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.179 [2024-04-15 02:04:21.805211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.179 [2024-04-15 02:04:21.805960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.179 [2024-04-15 02:04:21.805988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.437 [2024-04-15 02:04:21.836123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.437 [2024-04-15 02:04:21.837098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.437 [2024-04-15 02:04:21.837143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.437 [2024-04-15 02:04:21.870450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.437 [2024-04-15 02:04:21.871123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.437 [2024-04-15 02:04:21.871152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.437 [2024-04-15 02:04:21.902211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.437 [2024-04-15 02:04:21.903315] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.437 [2024-04-15 02:04:21.903345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.437 [2024-04-15 02:04:21.934188] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.437 [2024-04-15 02:04:21.934773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.437 [2024-04-15 02:04:21.934803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.437 [2024-04-15 02:04:21.966673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.437 [2024-04-15 02:04:21.967639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.437 [2024-04-15 02:04:21.967669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.437 [2024-04-15 02:04:22.000209] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.437 [2024-04-15 02:04:22.001097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.437 [2024-04-15 02:04:22.001126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.437 [2024-04-15 02:04:22.032151] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.437 [2024-04-15 02:04:22.033293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.437 [2024-04-15 02:04:22.033322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.437 [2024-04-15 02:04:22.064098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.437 [2024-04-15 02:04:22.064757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.437 [2024-04-15 02:04:22.064786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.697 [2024-04-15 02:04:22.097073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.697 [2024-04-15 02:04:22.097859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.697 [2024-04-15 02:04:22.097896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.697 [2024-04-15 02:04:22.127150] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 
00:29:36.697 [2024-04-15 02:04:22.128233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.697 [2024-04-15 02:04:22.128264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.697 [2024-04-15 02:04:22.158467] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.697 [2024-04-15 02:04:22.159431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.697 [2024-04-15 02:04:22.159474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.697 [2024-04-15 02:04:22.190791] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.697 [2024-04-15 02:04:22.191546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.697 [2024-04-15 02:04:22.191575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.697 [2024-04-15 02:04:22.223307] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.697 [2024-04-15 02:04:22.223987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.697 [2024-04-15 02:04:22.224018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:36.697 [2024-04-15 02:04:22.253858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.697 [2024-04-15 02:04:22.254666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.697 [2024-04-15 02:04:22.254697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:36.697 [2024-04-15 02:04:22.287087] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.697 [2024-04-15 02:04:22.287778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.697 [2024-04-15 02:04:22.287808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:36.697 [2024-04-15 02:04:22.315240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x19c79d0) with pdu=0x2000190fef90 00:29:36.697 [2024-04-15 02:04:22.316153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.697 [2024-04-15 02:04:22.316183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:36.697 [2024-04-15 02:04:22.343935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x19c79d0) with pdu=0x2000190fef90
00:29:36.957 [2024-04-15 02:04:22.344910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.957 [2024-04-15 02:04:22.344940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:36.957
00:29:36.957 Latency(us)
00:29:36.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:36.957 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:36.957 nvme0n1 : 2.02 984.23 123.03 0.00 0.00 16187.09 4587.52 35146.71
00:29:36.957 ===================================================================================================================
00:29:36.957 Total : 984.23 123.03 0.00 0.00 16187.09 4587.52 35146.71
00:29:36.957 0
00:29:36.957 02:04:22 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:36.957 02:04:22 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:36.957 02:04:22 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:36.957 | .driver_specific
00:29:36.957 | .nvme_error
00:29:36.957 | .status_code
00:29:36.957 | .command_transient_transport_error'
00:29:36.957 02:04:22 -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:37.217 02:04:22 -- host/digest.sh@71 -- # (( 63 > 0 ))
00:29:37.217 02:04:22 -- host/digest.sh@73 -- # killprocess 2281422
00:29:37.217 02:04:22 -- common/autotest_common.sh@926 -- # '[' -z 2281422 ']'
00:29:37.217 02:04:22 -- common/autotest_common.sh@930 -- # kill -0 2281422
00:29:37.217 02:04:22 -- common/autotest_common.sh@931 -- # uname
00:29:37.217 02:04:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:29:37.217 02:04:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2281422
00:29:37.217 02:04:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:29:37.217 02:04:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:29:37.217 02:04:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2281422'
killing process with pid 2281422
02:04:22 -- common/autotest_common.sh@945 -- # kill 2281422
Received shutdown signal, test time was about 2.000000 seconds
00:29:37.217
00:29:37.217 Latency(us)
00:29:37.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:37.217 ===================================================================================================================
00:29:37.217 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:37.217 02:04:22 -- common/autotest_common.sh@950 -- # wait 2281422
00:29:37.476 02:04:22 -- host/digest.sh@115 -- # killprocess 2279761
02:04:22 -- common/autotest_common.sh@926 -- # '[' -z 2279761 ']'
02:04:22 -- common/autotest_common.sh@930 -- # kill -0 2279761
02:04:22 -- common/autotest_common.sh@931 -- # uname
02:04:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
02:04:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2279761
02:04:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0
02:04:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
02:04:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2279761'
00:29:37.476 killing process with pid 2279761 00:29:37.476 02:04:22 -- common/autotest_common.sh@945 -- # kill 2279761 00:29:37.476 02:04:22 -- common/autotest_common.sh@950 -- # wait 2279761 00:29:37.735 00:29:37.735 real 0m17.564s 00:29:37.735 user 0m36.254s 00:29:37.735 sys 0m3.831s 00:29:37.735 02:04:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:37.735 02:04:23 -- common/autotest_common.sh@10 -- # set +x 00:29:37.735 ************************************ 00:29:37.735 END TEST nvmf_digest_error 00:29:37.735 ************************************ 00:29:37.735 02:04:23 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT 00:29:37.735 02:04:23 -- host/digest.sh@139 -- # nvmftestfini 00:29:37.735 02:04:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:37.735 02:04:23 -- nvmf/common.sh@116 -- # sync 00:29:37.735 02:04:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:29:37.735 02:04:23 -- nvmf/common.sh@119 -- # set +e 00:29:37.735 02:04:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:37.735 02:04:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:29:37.735 rmmod nvme_tcp 00:29:37.735 rmmod nvme_fabrics 00:29:37.735 rmmod nvme_keyring 00:29:37.735 02:04:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:37.735 02:04:23 -- nvmf/common.sh@123 -- # set -e 00:29:37.735 02:04:23 -- nvmf/common.sh@124 -- # return 0 00:29:37.735 02:04:23 -- nvmf/common.sh@477 -- # '[' -n 2279761 ']' 00:29:37.735 02:04:23 -- nvmf/common.sh@478 -- # killprocess 2279761 00:29:37.735 02:04:23 -- common/autotest_common.sh@926 -- # '[' -z 2279761 ']' 00:29:37.735 02:04:23 -- common/autotest_common.sh@930 -- # kill -0 2279761 00:29:37.735 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2279761) - No such process 00:29:37.735 02:04:23 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2279761 is not found' 00:29:37.735 Process with pid 2279761 is not found 00:29:37.735 02:04:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:37.735 02:04:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:29:37.735 02:04:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:29:37.735 02:04:23 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:37.735 02:04:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:29:37.735 02:04:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.735 02:04:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:37.735 02:04:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.640 02:04:25 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:29:39.640 00:29:39.640 real 0m36.995s 00:29:39.640 user 1m7.523s 00:29:39.640 sys 0m9.061s 00:29:39.640 02:04:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:39.640 02:04:25 -- common/autotest_common.sh@10 -- # set +x 00:29:39.640 ************************************ 00:29:39.640 END TEST nvmf_digest 00:29:39.640 ************************************ 00:29:39.640 02:04:25 -- nvmf/nvmf.sh@109 -- # [[ 0 -eq 1 ]] 00:29:39.640 02:04:25 -- nvmf/nvmf.sh@114 -- # [[ 0 -eq 1 ]] 00:29:39.640 02:04:25 -- nvmf/nvmf.sh@119 -- # [[ phy == phy ]] 00:29:39.640 02:04:25 -- nvmf/nvmf.sh@121 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:39.640 02:04:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:29:39.640 02:04:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:29:39.640 02:04:25 -- 
common/autotest_common.sh@10 -- # set +x 00:29:39.640 ************************************ 00:29:39.640 START TEST nvmf_bdevperf 00:29:39.640 ************************************ 00:29:39.640 02:04:25 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:39.898 * Looking for test storage... 00:29:39.898 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:39.898 02:04:25 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:39.898 02:04:25 -- nvmf/common.sh@7 -- # uname -s 00:29:39.898 02:04:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:39.898 02:04:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:39.898 02:04:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:39.898 02:04:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:39.898 02:04:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:39.898 02:04:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:39.898 02:04:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:39.898 02:04:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:39.898 02:04:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:39.898 02:04:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:39.898 02:04:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.898 02:04:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:29:39.898 02:04:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:39.898 02:04:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:39.898 02:04:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:39.898 02:04:25 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:39.898 02:04:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:39.898 02:04:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:39.898 02:04:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:39.898 02:04:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.898 02:04:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.898 02:04:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.898 02:04:25 -- paths/export.sh@5 -- # export PATH 00:29:39.898 02:04:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:39.898 02:04:25 -- nvmf/common.sh@46 -- # : 0 00:29:39.898 02:04:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:39.898 02:04:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:39.898 02:04:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:39.898 02:04:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:39.898 02:04:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:39.898 02:04:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:39.898 02:04:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:39.898 02:04:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:39.898 02:04:25 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:39.898 02:04:25 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:39.898 02:04:25 -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:39.898 02:04:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:29:39.898 02:04:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:39.898 02:04:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:39.898 02:04:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:39.898 02:04:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:39.898 02:04:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:39.898 02:04:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:39.898 02:04:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:39.898 02:04:25 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:39.898 02:04:25 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:39.898 02:04:25 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:39.898 02:04:25 -- common/autotest_common.sh@10 -- # set +x 00:29:41.806 02:04:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:41.806 02:04:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:41.806 02:04:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:41.806 02:04:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:41.806 02:04:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:41.806 02:04:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:41.806 02:04:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:41.806 02:04:27 -- nvmf/common.sh@294 -- # net_devs=() 00:29:41.806 02:04:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:41.806 02:04:27 -- nvmf/common.sh@295 
-- # e810=() 00:29:41.806 02:04:27 -- nvmf/common.sh@295 -- # local -ga e810 00:29:41.806 02:04:27 -- nvmf/common.sh@296 -- # x722=() 00:29:41.806 02:04:27 -- nvmf/common.sh@296 -- # local -ga x722 00:29:41.806 02:04:27 -- nvmf/common.sh@297 -- # mlx=() 00:29:41.806 02:04:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:41.806 02:04:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:41.806 02:04:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:41.806 02:04:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:41.806 02:04:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:41.806 02:04:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:41.806 02:04:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:41.806 02:04:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:41.806 02:04:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:41.806 02:04:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:41.806 02:04:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:41.806 02:04:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:41.806 02:04:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:41.806 02:04:27 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:29:41.806 02:04:27 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:29:41.806 02:04:27 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:29:41.806 02:04:27 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:29:41.806 02:04:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:41.806 02:04:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:41.806 02:04:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:29:41.806 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:29:41.806 02:04:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:41.806 02:04:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:41.806 02:04:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.806 02:04:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.806 02:04:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:41.806 02:04:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:41.806 02:04:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:29:41.806 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:29:41.806 02:04:27 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:29:41.806 02:04:27 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:29:41.806 02:04:27 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:41.806 02:04:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:41.806 02:04:27 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:29:41.806 02:04:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:41.806 02:04:27 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:29:41.806 02:04:27 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:29:41.806 02:04:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:41.807 02:04:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.807 02:04:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:41.807 02:04:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.807 02:04:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:29:41.807 Found 
net devices under 0000:0a:00.0: cvl_0_0 00:29:41.807 02:04:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.807 02:04:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:41.807 02:04:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:41.807 02:04:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:41.807 02:04:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:41.807 02:04:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:29:41.807 Found net devices under 0000:0a:00.1: cvl_0_1 00:29:41.807 02:04:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:41.807 02:04:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:41.807 02:04:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:41.807 02:04:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:41.807 02:04:27 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:29:41.807 02:04:27 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:29:41.807 02:04:27 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:41.807 02:04:27 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:41.807 02:04:27 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:41.807 02:04:27 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:29:41.807 02:04:27 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:41.807 02:04:27 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:41.807 02:04:27 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:29:41.807 02:04:27 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:41.807 02:04:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:41.807 02:04:27 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:29:41.807 02:04:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:29:41.807 02:04:27 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:29:41.807 02:04:27 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:41.807 02:04:27 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:41.807 02:04:27 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:41.807 02:04:27 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:29:41.807 02:04:27 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:41.807 02:04:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:41.807 02:04:27 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:41.807 02:04:27 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:29:41.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:41.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:29:41.807 00:29:41.807 --- 10.0.0.2 ping statistics --- 00:29:41.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.807 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:29:41.807 02:04:27 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:41.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:41.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:29:41.807 00:29:41.807 --- 10.0.0.1 ping statistics --- 00:29:41.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:41.807 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:29:41.807 02:04:27 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:41.807 02:04:27 -- nvmf/common.sh@410 -- # return 0 00:29:41.807 02:04:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:41.807 02:04:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:41.807 02:04:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:29:41.807 02:04:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:29:41.807 02:04:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:41.807 02:04:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:29:41.807 02:04:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:29:41.807 02:04:27 -- host/bdevperf.sh@25 -- # tgt_init 00:29:41.807 02:04:27 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:41.807 02:04:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:41.807 02:04:27 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:41.807 02:04:27 -- common/autotest_common.sh@10 -- # set +x 00:29:41.807 02:04:27 -- nvmf/common.sh@469 -- # nvmfpid=2283822 00:29:41.807 02:04:27 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:41.807 02:04:27 -- nvmf/common.sh@470 -- # waitforlisten 2283822 00:29:41.807 02:04:27 -- common/autotest_common.sh@819 -- # '[' -z 2283822 ']' 00:29:41.807 02:04:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:41.807 02:04:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:41.807 02:04:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:41.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:41.807 02:04:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:41.807 02:04:27 -- common/autotest_common.sh@10 -- # set +x 00:29:41.807 [2024-04-15 02:04:27.347559] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:29:41.807 [2024-04-15 02:04:27.347645] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:41.807 EAL: No free 2048 kB hugepages reported on node 1 00:29:41.807 [2024-04-15 02:04:27.414188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:42.066 [2024-04-15 02:04:27.501942] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:42.066 [2024-04-15 02:04:27.502113] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:42.066 [2024-04-15 02:04:27.502132] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:42.066 [2024-04-15 02:04:27.502162] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
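
The app_setup_trace notices just above are the target's own pointers for inspecting its tracepoints while it runs: a live snapshot via spdk_trace, or copying the raw shared-memory file. A minimal sketch of acting on them, reusing the paths seen in this trace (the rpc_get_methods readiness poll and the build/bin location of spdk_trace are assumptions for illustration, not steps this test performs):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# Block until the nvmf_tgt launched above (-i 0 -e 0xFFFF -m 0xE) answers on its
# default RPC socket; the unix socket is reachable even though the target runs
# inside the cvl_0_0_ns_spdk network namespace.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.1
done
# Live snapshot of the tracepoint groups enabled by -e 0xFFFF, exactly as the
# startup notice suggests (shm instance id 0 matches nvmf_trace.0).
"$SPDK/build/bin/spdk_trace" -s nvmf -i 0
# Or keep the raw trace shm file for offline analysis, per the second notice.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0

The test itself only performs the readiness wait (waitforlisten) before driving the target; the trace capture is optional debugging.
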
00:29:42.066 [2024-04-15 02:04:27.502220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:29:42.066 [2024-04-15 02:04:27.502281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:29:42.066 [2024-04-15 02:04:27.502283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:43.004 02:04:28 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:43.005 02:04:28 -- common/autotest_common.sh@852 -- # return 0
00:29:43.005 02:04:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:29:43.005 02:04:28 -- common/autotest_common.sh@718 -- # xtrace_disable
00:29:43.005 02:04:28 -- common/autotest_common.sh@10 -- # set +x
00:29:43.005 02:04:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:43.005 02:04:28 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:43.005 02:04:28 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:43.005 02:04:28 -- common/autotest_common.sh@10 -- # set +x
00:29:43.005 [2024-04-15 02:04:28.339663] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:43.005 02:04:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:43.005 02:04:28 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:43.005 02:04:28 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:43.005 02:04:28 -- common/autotest_common.sh@10 -- # set +x
00:29:43.005 Malloc0
00:29:43.005 02:04:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:43.005 02:04:28 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:43.005 02:04:28 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:43.005 02:04:28 -- common/autotest_common.sh@10 -- # set +x
00:29:43.005 02:04:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:43.005 02:04:28 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:43.005 02:04:28 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:43.005 02:04:28 -- common/autotest_common.sh@10 -- # set +x
00:29:43.005 02:04:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:43.005 02:04:28 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:43.005 02:04:28 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:43.005 02:04:28 -- common/autotest_common.sh@10 -- # set +x
00:29:43.005 [2024-04-15 02:04:28.402832] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:43.005 02:04:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:43.005 02:04:28 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:29:43.005 02:04:28 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:29:43.005 02:04:28 -- nvmf/common.sh@520 -- # config=()
00:29:43.005 02:04:28 -- nvmf/common.sh@520 -- # local subsystem config
00:29:43.005 02:04:28 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:29:43.005 02:04:28 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:29:43.005 {
00:29:43.005 "params": {
00:29:43.005 "name": "Nvme$subsystem",
00:29:43.005 "trtype": "$TEST_TRANSPORT",
00:29:43.005 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:43.005 "adrfam": "ipv4",
00:29:43.005 "trsvcid": "$NVMF_PORT",
00:29:43.005 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:43.005 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:43.005 "hdgst": ${hdgst:-false},
00:29:43.005 "ddgst": ${ddgst:-false}
00:29:43.005 },
00:29:43.005 "method": "bdev_nvme_attach_controller"
00:29:43.005 }
00:29:43.005 EOF
00:29:43.005 )")
00:29:43.005 02:04:28 -- nvmf/common.sh@542 -- # cat
00:29:43.005 02:04:28 -- nvmf/common.sh@544 -- # jq .
00:29:43.005 02:04:28 -- nvmf/common.sh@545 -- # IFS=,
00:29:43.005 02:04:28 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:29:43.005 "params": {
00:29:43.005 "name": "Nvme1",
00:29:43.005 "trtype": "tcp",
00:29:43.005 "traddr": "10.0.0.2",
00:29:43.005 "adrfam": "ipv4",
00:29:43.005 "trsvcid": "4420",
00:29:43.005 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:43.005 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:43.005 "hdgst": false,
00:29:43.005 "ddgst": false
00:29:43.005 },
00:29:43.005 "method": "bdev_nvme_attach_controller"
00:29:43.005 }'
00:29:43.005 [2024-04-15 02:04:28.451747] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
00:29:43.005 [2024-04-15 02:04:28.451829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2283978 ]
00:29:43.005 EAL: No free 2048 kB hugepages reported on node 1
00:29:43.005 [2024-04-15 02:04:28.513730] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:43.005 [2024-04-15 02:04:28.603516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:43.263 Running I/O for 1 seconds...
00:29:44.642
00:29:44.642 Latency(us)
00:29:44.642 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:44.642 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:44.642 Verification LBA range: start 0x0 length 0x4000
00:29:44.642 Nvme1n1 : 1.01 13380.40 52.27 0.00 0.00 9519.31 1371.40 16311.18
00:29:44.642 ===================================================================================================================
00:29:44.642 Total : 13380.40 52.27 0.00 0.00 9519.31 1371.40 16311.18
00:29:44.642 02:04:30 -- host/bdevperf.sh@30 -- # bdevperfpid=2284246
00:29:44.642 02:04:30 -- host/bdevperf.sh@32 -- # sleep 3
00:29:44.642 02:04:30 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:29:44.642 02:04:30 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:29:44.642 02:04:30 -- nvmf/common.sh@520 -- # config=()
00:29:44.642 02:04:30 -- nvmf/common.sh@520 -- # local subsystem config
00:29:44.642 02:04:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:29:44.642 02:04:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:29:44.642 {
00:29:44.642 "params": {
00:29:44.642 "name": "Nvme$subsystem",
00:29:44.642 "trtype": "$TEST_TRANSPORT",
00:29:44.642 "traddr": "$NVMF_FIRST_TARGET_IP",
00:29:44.642 "adrfam": "ipv4",
00:29:44.642 "trsvcid": "$NVMF_PORT",
00:29:44.642 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:29:44.642 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:29:44.642 "hdgst": ${hdgst:-false},
00:29:44.642 "ddgst": ${ddgst:-false}
00:29:44.642 },
00:29:44.642 "method": "bdev_nvme_attach_controller"
00:29:44.642 }
00:29:44.642 EOF
00:29:44.642 )")
00:29:44.642 02:04:30 -- nvmf/common.sh@542 -- # cat
00:29:44.642 02:04:30 -- nvmf/common.sh@544 -- # jq .
00:29:44.642 02:04:30 -- nvmf/common.sh@545 -- # IFS=,
00:29:44.642 02:04:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:29:44.642 "params": {
00:29:44.642 "name": "Nvme1",
00:29:44.642 "trtype": "tcp",
00:29:44.642 "traddr": "10.0.0.2",
00:29:44.642 "adrfam": "ipv4",
00:29:44.642 "trsvcid": "4420",
00:29:44.642 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:29:44.642 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:29:44.642 "hdgst": false,
00:29:44.642 "ddgst": false
00:29:44.642 },
00:29:44.642 "method": "bdev_nvme_attach_controller"
00:29:44.642 }'
00:29:44.642 [2024-04-15 02:04:30.191301] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
00:29:44.642 [2024-04-15 02:04:30.191405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2284246 ]
00:29:44.642 EAL: No free 2048 kB hugepages reported on node 1
00:29:44.642 [2024-04-15 02:04:30.253581] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:44.901 [2024-04-15 02:04:30.340601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:29:44.901 Running I/O for 15 seconds...
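
What the gen_nvmf_target_json trace above amounts to: bdevperf is handed a one-controller JSON config over an anonymous /dev/fd path produced by process substitution. A standalone sketch under stated assumptions — the params/method object is verbatim from the rendered config above, while the top-level "subsystems"/"config" wrapper is assumed from SPDK's usual JSON-config layout and is not itself shown in this log:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
gen_config() {
    # Wrapper assumed; the method/params object matches the printf'd config above.
    cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}
# Same shape as the traced invocation: queue depth 128, 4096-byte I/Os,
# verify workload, 15-second run, plus the -f flag used above.
"$SPDK/build/examples/bdevperf" --json <(gen_config) -q 128 -o 4096 -w verify -t 15 -f

The kill -9 of the target pid (2283822) that follows is the point of this 15-second run: the controller disappears mid-I/O, and the stream of ABORTED - SQ DELETION completions below is the host aborting its outstanding requests while bdevperf rides out the failure.
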
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.161891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.161909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.161926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.161944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:17128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.161963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.161980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:17152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.161995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:17160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:17720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:17736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162237] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:17192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:17256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:17320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 
lba:17776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:17808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.194 [2024-04-15 02:04:33.162785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.194 [2024-04-15 02:04:33.162869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.194 [2024-04-15 02:04:33.162884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.162901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17872 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:48.195 [2024-04-15 02:04:33.162916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.162933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:17880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.195 [2024-04-15 02:04:33.162949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.162965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.195 [2024-04-15 02:04:33.162980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.162997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:17904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:17928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.195 [2024-04-15 02:04:33.163169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:17952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.195 [2024-04-15 
02:04:33.163254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:17368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:17424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.195 [2024-04-15 02:04:33.163561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163593] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.195 [2024-04-15 02:04:33.163657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.195 [2024-04-15 02:04:33.163689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:18016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.195 [2024-04-15 02:04:33.163785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:18040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.163918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.195 [2024-04-15 02:04:33.163951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.195 [2024-04-15 02:04:33.163983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.163999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:18072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.195 [2024-04-15 02:04:33.164015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.164032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.164055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.164073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.195 [2024-04-15 02:04:33.164109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.164126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:18096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.164140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.164162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.164175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.164190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.195 [2024-04-15 02:04:33.164203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.195 [2024-04-15 02:04:33.164218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:18120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.195 [2024-04-15 02:04:33.164232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.196 [2024-04-15 02:04:33.164260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.164289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:17456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.164321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:17472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.164379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.164411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.164443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.164474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:17552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.164506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.164538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.196 [2024-04-15 02:04:33.164570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:18144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.164602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 
[2024-04-15 02:04:33.164619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.196 [2024-04-15 02:04:33.164634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:18160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.196 [2024-04-15 02:04:33.164665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:18168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.196 [2024-04-15 02:04:33.164697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.164728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:18184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.164765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.196 [2024-04-15 02:04:33.164797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.164829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.164861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.164892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.164924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:17648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.164965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.164983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.164999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.165016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.165032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.165055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.165073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.165109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.165124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.165139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.165152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.165168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:18216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.165189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.165205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:18224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.196 [2024-04-15 02:04:33.165219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.165234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.165248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.165263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:18240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.165277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.165292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:18248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.196 [2024-04-15 02:04:33.165306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.165321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:18256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.196 [2024-04-15 02:04:33.165350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.165374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.196 [2024-04-15 02:04:33.165389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.165406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.165422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.165439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.165454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.165472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.196 [2024-04-15 02:04:33.165488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.165505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.165526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.165544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.196 [2024-04-15 02:04:33.165560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.196 [2024-04-15 02:04:33.165577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:18312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.197 [2024-04-15 02:04:33.165592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.197 [2024-04-15 02:04:33.165613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.197 [2024-04-15 02:04:33.165629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.197 [2024-04-15 02:04:33.165646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18328 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.197 [2024-04-15 02:04:33.165662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.197 [2024-04-15 02:04:33.165678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:18336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.197 [2024-04-15 02:04:33.165694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.197 [2024-04-15 02:04:33.165711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.197 [2024-04-15 02:04:33.165726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.197 [2024-04-15 02:04:33.165743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:48.197 [2024-04-15 02:04:33.165758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.197 [2024-04-15 02:04:33.165775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.197 [2024-04-15 02:04:33.165790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.197 [2024-04-15 02:04:33.165807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.197 [2024-04-15 02:04:33.165822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.197 [2024-04-15 02:04:33.165840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:17728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.197 [2024-04-15 02:04:33.165855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.197 [2024-04-15 02:04:33.165871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.197 [2024-04-15 02:04:33.165886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.197 [2024-04-15 02:04:33.165903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.197 [2024-04-15 02:04:33.165919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.197 [2024-04-15 02:04:33.165935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:17792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:48.197 [2024-04-15 02:04:33.165950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.197 [2024-04-15 02:04:33.165966] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e4f590 is same with the state(5) to be set 00:29:48.197 
[2024-04-15 02:04:33.165984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:48.197 [2024-04-15 02:04:33.165997] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:48.197 [2024-04-15 02:04:33.166011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17800 len:8 PRP1 0x0 PRP2 0x0 00:29:48.197 [2024-04-15 02:04:33.166030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.197 [2024-04-15 02:04:33.166122] bdev_nvme.c:1576:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1e4f590 was disconnected and freed. reset controller. 00:29:48.197 [2024-04-15 02:04:33.166184] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:48.197 [2024-04-15 02:04:33.166206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.197 [2024-04-15 02:04:33.166222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:48.197 [2024-04-15 02:04:33.166235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.197 [2024-04-15 02:04:33.166249] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:48.197 [2024-04-15 02:04:33.166262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.197 [2024-04-15 02:04:33.166276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:48.197 [2024-04-15 02:04:33.166289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:48.197 [2024-04-15 02:04:33.166302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.197 [2024-04-15 02:04:33.168596] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.197 [2024-04-15 02:04:33.168638] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.197 [2024-04-15 02:04:33.169236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.197 [2024-04-15 02:04:33.169472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.197 [2024-04-15 02:04:33.169498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.197 [2024-04-15 02:04:33.169529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.197 [2024-04-15 02:04:33.169687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.197 [2024-04-15 02:04:33.169876] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.197 [2024-04-15 02:04:33.169899] 
nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.197 [2024-04-15 02:04:33.169916] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.197 [2024-04-15 02:04:33.172543] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.197 [2024-04-15 02:04:33.181505] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.197 [2024-04-15 02:04:33.181937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.197 [2024-04-15 02:04:33.182193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.197 [2024-04-15 02:04:33.182236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.197 [2024-04-15 02:04:33.182252] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.197 [2024-04-15 02:04:33.182448] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.197 [2024-04-15 02:04:33.182616] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.197 [2024-04-15 02:04:33.182647] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.197 [2024-04-15 02:04:33.182663] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.197 [2024-04-15 02:04:33.184968] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.197 [2024-04-15 02:04:33.194236] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.197 [2024-04-15 02:04:33.194655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.197 [2024-04-15 02:04:33.194919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.197 [2024-04-15 02:04:33.194950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.197 [2024-04-15 02:04:33.194968] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.197 [2024-04-15 02:04:33.195149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.197 [2024-04-15 02:04:33.195321] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.197 [2024-04-15 02:04:33.195346] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.197 [2024-04-15 02:04:33.195362] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.197 [2024-04-15 02:04:33.197613] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
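The pair printed as (00/08) in each completion line above is the NVMe status code type and status code: SCT 0x0 (generic command status) with SC 0x08, "Command Aborted due to SQ Deletion". That is why every outstanding READ/WRITE on qid:1 completes with this status once the submission queue is torn down for the controller reset. A minimal sketch of the decode, assuming the status-field layout from the NVMe base specification (the helper below is illustrative only, not an SPDK API):

```c
#include <stdio.h>
#include <stdint.h>

/* Decode the 16-bit status half of completion dword 3.
 * Bit  0    : phase tag (P)
 * Bits 1-8  : status code (SC)
 * Bits 9-11 : status code type (SCT)
 * Hypothetical helper for illustration; not part of SPDK. */
static void decode_status(uint16_t status)
{
    uint8_t sc  = (status >> 1) & 0xff;
    uint8_t sct = (status >> 9) & 0x7;
    printf("SCT=0x%02x SC=0x%02x -> %s\n", sct, sc,
           (sct == 0x0 && sc == 0x08)
               ? "ABORTED - SQ DELETION"
               : "other status");
}

int main(void)
{
    /* SCT 0x0 / SC 0x08, printed as "(00/08)" in the log above. */
    decode_status((uint16_t)((0x0 << 9) | (0x08 << 1)));
    return 0;
}
```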
00:29:48.197 [2024-04-15 02:04:33.206791] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.197 [2024-04-15 02:04:33.207271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.197 [2024-04-15 02:04:33.207495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.197 [2024-04-15 02:04:33.207527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.197 [2024-04-15 02:04:33.207546] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.197 [2024-04-15 02:04:33.207714] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.197 [2024-04-15 02:04:33.207866] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.197 [2024-04-15 02:04:33.207891] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.197 [2024-04-15 02:04:33.207907] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.197 [2024-04-15 02:04:33.210204] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.197 [2024-04-15 02:04:33.219194] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.197 [2024-04-15 02:04:33.219644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.197 [2024-04-15 02:04:33.219887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.197 [2024-04-15 02:04:33.219918] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.197 [2024-04-15 02:04:33.219936] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.197 [2024-04-15 02:04:33.220116] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.197 [2024-04-15 02:04:33.220269] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.197 [2024-04-15 02:04:33.220295] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.197 [2024-04-15 02:04:33.220317] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.197 [2024-04-15 02:04:33.222639] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.198 [2024-04-15 02:04:33.231639] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.198 [2024-04-15 02:04:33.232080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.198 [2024-04-15 02:04:33.232340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.198 [2024-04-15 02:04:33.232370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.198 [2024-04-15 02:04:33.232388] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.198 [2024-04-15 02:04:33.232537] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.198 [2024-04-15 02:04:33.232634] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.198 [2024-04-15 02:04:33.232658] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.198 [2024-04-15 02:04:33.232673] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.198 [2024-04-15 02:04:33.235032] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.198 [2024-04-15 02:04:33.244200] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.198 [2024-04-15 02:04:33.244660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.198 [2024-04-15 02:04:33.244911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.198 [2024-04-15 02:04:33.244942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.198 [2024-04-15 02:04:33.244960] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.198 [2024-04-15 02:04:33.245140] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.198 [2024-04-15 02:04:33.245276] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.198 [2024-04-15 02:04:33.245301] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.198 [2024-04-15 02:04:33.245317] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.198 [2024-04-15 02:04:33.247620] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.198 [2024-04-15 02:04:33.256944] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.198 [2024-04-15 02:04:33.257387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.198 [2024-04-15 02:04:33.257663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.198 [2024-04-15 02:04:33.257692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.198 [2024-04-15 02:04:33.257710] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.198 [2024-04-15 02:04:33.257859] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.198 [2024-04-15 02:04:33.257992] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.198 [2024-04-15 02:04:33.258016] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.198 [2024-04-15 02:04:33.258031] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.198 [2024-04-15 02:04:33.260247] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.198 [2024-04-15 02:04:33.269604] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.198 [2024-04-15 02:04:33.270081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.198 [2024-04-15 02:04:33.270369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.198 [2024-04-15 02:04:33.270399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.198 [2024-04-15 02:04:33.270417] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.198 [2024-04-15 02:04:33.270620] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.198 [2024-04-15 02:04:33.270808] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.198 [2024-04-15 02:04:33.270834] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.198 [2024-04-15 02:04:33.270850] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.198 [2024-04-15 02:04:33.273148] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
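Each reset attempt above fails the same way: posix_sock_create reports connect() failed, errno = 111, which on Linux is ECONNREFUSED — the target side of 10.0.0.2:4420 (4420 is the standard NVMe/TCP port) is not accepting connections while it is being restarted. A self-contained way to see the same errno with plain POSIX sockets; the address and port are placeholders copied from the log, and this assumes the host is reachable but nothing is listening on the port:

```c
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port   = htons(4420),      /* NVMe/TCP port from the log */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr); /* placeholder target */

    /* With a reachable host and no listener on the port, this fails
     * with errno 111 (ECONNREFUSED) on Linux. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}
```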
00:29:48.198 [2024-04-15 02:04:33.281990] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.198 [2024-04-15 02:04:33.282426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.198 [2024-04-15 02:04:33.282705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.198 [2024-04-15 02:04:33.282735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.198 [2024-04-15 02:04:33.282754] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.198 [2024-04-15 02:04:33.282921] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.198 [2024-04-15 02:04:33.283123] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.198 [2024-04-15 02:04:33.283149] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.198 [2024-04-15 02:04:33.283165] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.198 [2024-04-15 02:04:33.285520] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.198 [2024-04-15 02:04:33.294566] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.198 [2024-04-15 02:04:33.294975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.198 [2024-04-15 02:04:33.295224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.198 [2024-04-15 02:04:33.295256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.198 [2024-04-15 02:04:33.295275] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.198 [2024-04-15 02:04:33.295478] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.198 [2024-04-15 02:04:33.295648] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.198 [2024-04-15 02:04:33.295673] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.198 [2024-04-15 02:04:33.295689] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.198 [2024-04-15 02:04:33.297920] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.198 [2024-04-15 02:04:33.307336] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.198 [2024-04-15 02:04:33.307700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.198 [2024-04-15 02:04:33.307953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.198 [2024-04-15 02:04:33.307983] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.198 [2024-04-15 02:04:33.308001] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.198 [2024-04-15 02:04:33.308252] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.198 [2024-04-15 02:04:33.308425] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.198 [2024-04-15 02:04:33.308451] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.198 [2024-04-15 02:04:33.308466] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.198 [2024-04-15 02:04:33.310841] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.198 [2024-04-15 02:04:33.320000] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.198 [2024-04-15 02:04:33.320389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.198 [2024-04-15 02:04:33.320669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.198 [2024-04-15 02:04:33.320699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.198 [2024-04-15 02:04:33.320717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.198 [2024-04-15 02:04:33.320901] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.198 [2024-04-15 02:04:33.321085] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.199 [2024-04-15 02:04:33.321120] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.199 [2024-04-15 02:04:33.321137] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.199 [2024-04-15 02:04:33.323619] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.199 [2024-04-15 02:04:33.332697] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.199 [2024-04-15 02:04:33.333114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.199 [2024-04-15 02:04:33.333391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.199 [2024-04-15 02:04:33.333421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.199 [2024-04-15 02:04:33.333439] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.199 [2024-04-15 02:04:33.333607] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.199 [2024-04-15 02:04:33.333795] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.199 [2024-04-15 02:04:33.333820] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.199 [2024-04-15 02:04:33.333836] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.199 [2024-04-15 02:04:33.336119] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.199 [2024-04-15 02:04:33.345398] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.199 [2024-04-15 02:04:33.345875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.199 [2024-04-15 02:04:33.346153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.199 [2024-04-15 02:04:33.346184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.199 [2024-04-15 02:04:33.346202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.199 [2024-04-15 02:04:33.346369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.199 [2024-04-15 02:04:33.346522] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.199 [2024-04-15 02:04:33.346547] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.199 [2024-04-15 02:04:33.346563] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.199 [2024-04-15 02:04:33.348890] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.199 [2024-04-15 02:04:33.358087] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.199 [2024-04-15 02:04:33.358524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.199 [2024-04-15 02:04:33.358772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.199 [2024-04-15 02:04:33.358802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.199 [2024-04-15 02:04:33.358820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.199 [2024-04-15 02:04:33.358969] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.199 [2024-04-15 02:04:33.359171] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.199 [2024-04-15 02:04:33.359198] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.199 [2024-04-15 02:04:33.359214] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.199 [2024-04-15 02:04:33.361712] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.199 [2024-04-15 02:04:33.370610] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.199 [2024-04-15 02:04:33.371043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.199 [2024-04-15 02:04:33.371269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.199 [2024-04-15 02:04:33.371299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.199 [2024-04-15 02:04:33.371317] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.199 [2024-04-15 02:04:33.371430] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.199 [2024-04-15 02:04:33.371582] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.199 [2024-04-15 02:04:33.371606] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.199 [2024-04-15 02:04:33.371621] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.199 [2024-04-15 02:04:33.374002] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.199 [2024-04-15 02:04:33.383315] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.199 [2024-04-15 02:04:33.383764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.199 [2024-04-15 02:04:33.384038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.199 [2024-04-15 02:04:33.384085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.199 [2024-04-15 02:04:33.384105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.199 [2024-04-15 02:04:33.384254] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.199 [2024-04-15 02:04:33.384405] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.199 [2024-04-15 02:04:33.384429] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.199 [2024-04-15 02:04:33.384444] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.199 [2024-04-15 02:04:33.386785] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.199 [2024-04-15 02:04:33.395901] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.199 [2024-04-15 02:04:33.396383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.199 [2024-04-15 02:04:33.396865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.199 [2024-04-15 02:04:33.396916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.199 [2024-04-15 02:04:33.396934] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.199 [2024-04-15 02:04:33.397129] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.199 [2024-04-15 02:04:33.397372] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.199 [2024-04-15 02:04:33.397397] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.199 [2024-04-15 02:04:33.397413] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.199 [2024-04-15 02:04:33.399663] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.199 [2024-04-15 02:04:33.408425] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.199 [2024-04-15 02:04:33.408865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.199 [2024-04-15 02:04:33.409130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.199 [2024-04-15 02:04:33.409158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.199 [2024-04-15 02:04:33.409174] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.199 [2024-04-15 02:04:33.409381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.199 [2024-04-15 02:04:33.409566] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.199 [2024-04-15 02:04:33.409590] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.199 [2024-04-15 02:04:33.409606] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.199 [2024-04-15 02:04:33.411859] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.199 [2024-04-15 02:04:33.420966] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.199 [2024-04-15 02:04:33.421400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.199 [2024-04-15 02:04:33.421665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.199 [2024-04-15 02:04:33.421691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.199 [2024-04-15 02:04:33.421714] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.199 [2024-04-15 02:04:33.421870] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.199 [2024-04-15 02:04:33.422035] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.199 [2024-04-15 02:04:33.422093] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.199 [2024-04-15 02:04:33.422108] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.199 [2024-04-15 02:04:33.424180] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.199 [2024-04-15 02:04:33.433761] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.199 [2024-04-15 02:04:33.434149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.199 [2024-04-15 02:04:33.434377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.199 [2024-04-15 02:04:33.434408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.199 [2024-04-15 02:04:33.434426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.199 [2024-04-15 02:04:33.434593] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.199 [2024-04-15 02:04:33.434775] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.199 [2024-04-15 02:04:33.434799] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.199 [2024-04-15 02:04:33.434815] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.199 [2024-04-15 02:04:33.437235] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.200 [2024-04-15 02:04:33.446313] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.200 [2024-04-15 02:04:33.447011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.447290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.447319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.200 [2024-04-15 02:04:33.447337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.200 [2024-04-15 02:04:33.447484] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.200 [2024-04-15 02:04:33.447690] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.200 [2024-04-15 02:04:33.447715] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.200 [2024-04-15 02:04:33.447731] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.200 [2024-04-15 02:04:33.450035] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.200 [2024-04-15 02:04:33.458895] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.200 [2024-04-15 02:04:33.459360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.459618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.459661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.200 [2024-04-15 02:04:33.459677] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.200 [2024-04-15 02:04:33.459887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.200 [2024-04-15 02:04:33.460090] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.200 [2024-04-15 02:04:33.460125] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.200 [2024-04-15 02:04:33.460140] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.200 [2024-04-15 02:04:33.462463] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.200 [2024-04-15 02:04:33.471382] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.200 [2024-04-15 02:04:33.471894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.472310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.472341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.200 [2024-04-15 02:04:33.472359] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.200 [2024-04-15 02:04:33.472508] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.200 [2024-04-15 02:04:33.472720] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.200 [2024-04-15 02:04:33.472745] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.200 [2024-04-15 02:04:33.472761] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.200 [2024-04-15 02:04:33.475011] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.200 [2024-04-15 02:04:33.483941] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.200 [2024-04-15 02:04:33.484363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.484849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.484902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.200 [2024-04-15 02:04:33.484920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.200 [2024-04-15 02:04:33.485098] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.200 [2024-04-15 02:04:33.485306] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.200 [2024-04-15 02:04:33.485331] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.200 [2024-04-15 02:04:33.485347] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.200 [2024-04-15 02:04:33.487866] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.200 [2024-04-15 02:04:33.496662] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.200 [2024-04-15 02:04:33.497113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.497388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.497417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.200 [2024-04-15 02:04:33.497435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.200 [2024-04-15 02:04:33.497546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.200 [2024-04-15 02:04:33.497705] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.200 [2024-04-15 02:04:33.497730] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.200 [2024-04-15 02:04:33.497745] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.200 [2024-04-15 02:04:33.500094] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.200 [2024-04-15 02:04:33.509343] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.200 [2024-04-15 02:04:33.509753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.510162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.510194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.200 [2024-04-15 02:04:33.510212] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.200 [2024-04-15 02:04:33.510396] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.200 [2024-04-15 02:04:33.510602] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.200 [2024-04-15 02:04:33.510628] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.200 [2024-04-15 02:04:33.510644] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.200 [2024-04-15 02:04:33.513038] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.200 [2024-04-15 02:04:33.521914] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.200 [2024-04-15 02:04:33.522356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.522748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.522802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.200 [2024-04-15 02:04:33.522820] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.200 [2024-04-15 02:04:33.522950] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.200 [2024-04-15 02:04:33.523147] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.200 [2024-04-15 02:04:33.523173] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.200 [2024-04-15 02:04:33.523188] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.200 [2024-04-15 02:04:33.525473] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.200 [2024-04-15 02:04:33.534439] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.200 [2024-04-15 02:04:33.535127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.535408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.535438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.200 [2024-04-15 02:04:33.535456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.200 [2024-04-15 02:04:33.535640] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.200 [2024-04-15 02:04:33.535811] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.200 [2024-04-15 02:04:33.535842] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.200 [2024-04-15 02:04:33.535859] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.200 [2024-04-15 02:04:33.538205] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.200 [2024-04-15 02:04:33.547119] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.200 [2024-04-15 02:04:33.547761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.548164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.548195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.200 [2024-04-15 02:04:33.548213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.200 [2024-04-15 02:04:33.548362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.200 [2024-04-15 02:04:33.548534] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.200 [2024-04-15 02:04:33.548560] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.200 [2024-04-15 02:04:33.548576] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.200 [2024-04-15 02:04:33.550914] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.200 [2024-04-15 02:04:33.559600] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.200 [2024-04-15 02:04:33.560017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.560264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.200 [2024-04-15 02:04:33.560291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.201 [2024-04-15 02:04:33.560307] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.201 [2024-04-15 02:04:33.560496] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.201 [2024-04-15 02:04:33.560702] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.201 [2024-04-15 02:04:33.560728] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.201 [2024-04-15 02:04:33.560744] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.201 [2024-04-15 02:04:33.563015] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.201 [2024-04-15 02:04:33.572396] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.201 [2024-04-15 02:04:33.572994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.201 [2024-04-15 02:04:33.573305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.201 [2024-04-15 02:04:33.573347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.201 [2024-04-15 02:04:33.573365] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.201 [2024-04-15 02:04:33.573514] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.201 [2024-04-15 02:04:33.573683] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.201 [2024-04-15 02:04:33.573709] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.201 [2024-04-15 02:04:33.573730] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.201 [2024-04-15 02:04:33.576067] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.201 [2024-04-15 02:04:33.584872] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.201 [2024-04-15 02:04:33.585387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.201 [2024-04-15 02:04:33.585611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.201 [2024-04-15 02:04:33.585641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.201 [2024-04-15 02:04:33.585659] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.201 [2024-04-15 02:04:33.585790] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.201 [2024-04-15 02:04:33.585888] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.201 [2024-04-15 02:04:33.585911] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.201 [2024-04-15 02:04:33.585927] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.201 [2024-04-15 02:04:33.588163] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.201 [2024-04-15 02:04:33.597352] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.201 [2024-04-15 02:04:33.597808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.201 [2024-04-15 02:04:33.598061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.201 [2024-04-15 02:04:33.598109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.201 [2024-04-15 02:04:33.598125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.201 [2024-04-15 02:04:33.598276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.201 [2024-04-15 02:04:33.598471] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.201 [2024-04-15 02:04:33.598496] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.201 [2024-04-15 02:04:33.598512] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.201 [2024-04-15 02:04:33.600774] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.201 [2024-04-15 02:04:33.610016] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.201 [2024-04-15 02:04:33.610569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.201 [2024-04-15 02:04:33.611106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.201 [2024-04-15 02:04:33.611136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.201 [2024-04-15 02:04:33.611153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.201 [2024-04-15 02:04:33.611373] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.201 [2024-04-15 02:04:33.611580] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.201 [2024-04-15 02:04:33.611605] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.201 [2024-04-15 02:04:33.611622] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.201 [2024-04-15 02:04:33.613825] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.201 [2024-04-15 02:04:33.622690] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.201 [2024-04-15 02:04:33.623088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.201 [2024-04-15 02:04:33.623365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.201 [2024-04-15 02:04:33.623395] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.201 [2024-04-15 02:04:33.623413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.201 [2024-04-15 02:04:33.623633] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.201 [2024-04-15 02:04:33.623787] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.201 [2024-04-15 02:04:33.623813] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.201 [2024-04-15 02:04:33.623829] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.201 [2024-04-15 02:04:33.626330] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.201 [2024-04-15 02:04:33.635198] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.201 [2024-04-15 02:04:33.635630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.201 [2024-04-15 02:04:33.636098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.201 [2024-04-15 02:04:33.636123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.201 [2024-04-15 02:04:33.636138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.201 [2024-04-15 02:04:33.636336] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.201 [2024-04-15 02:04:33.636470] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.201 [2024-04-15 02:04:33.636493] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.201 [2024-04-15 02:04:33.636509] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.201 [2024-04-15 02:04:33.638761] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.201 [2024-04-15 02:04:33.647824] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.201 [2024-04-15 02:04:33.648183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.201 [2024-04-15 02:04:33.648405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.201 [2024-04-15 02:04:33.648432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.201 [2024-04-15 02:04:33.648450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.201 [2024-04-15 02:04:33.648634] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.201 [2024-04-15 02:04:33.648823] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.201 [2024-04-15 02:04:33.648848] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.201 [2024-04-15 02:04:33.648864] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.201 [2024-04-15 02:04:33.651260] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.201 [2024-04-15 02:04:33.660475] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.201 [2024-04-15 02:04:33.660878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.201 [2024-04-15 02:04:33.661166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.201 [2024-04-15 02:04:33.661194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.201 [2024-04-15 02:04:33.661225] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.201 [2024-04-15 02:04:33.661479] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.201 [2024-04-15 02:04:33.661634] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.201 [2024-04-15 02:04:33.661659] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.201 [2024-04-15 02:04:33.661676] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.201 [2024-04-15 02:04:33.663874] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.201 [2024-04-15 02:04:33.673146] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.201 [2024-04-15 02:04:33.673551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.201 [2024-04-15 02:04:33.674016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.201 [2024-04-15 02:04:33.674081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.201 [2024-04-15 02:04:33.674099] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.201 [2024-04-15 02:04:33.674265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.201 [2024-04-15 02:04:33.674434] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.201 [2024-04-15 02:04:33.674457] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.201 [2024-04-15 02:04:33.674473] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.202 [2024-04-15 02:04:33.676793] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.202 [2024-04-15 02:04:33.685648] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.202 [2024-04-15 02:04:33.686141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.686423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.686452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.202 [2024-04-15 02:04:33.686470] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.202 [2024-04-15 02:04:33.686654] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.202 [2024-04-15 02:04:33.686842] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.202 [2024-04-15 02:04:33.686867] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.202 [2024-04-15 02:04:33.686884] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.202 [2024-04-15 02:04:33.689273] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.202 [2024-04-15 02:04:33.698214] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.202 [2024-04-15 02:04:33.698637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.699108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.699139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.202 [2024-04-15 02:04:33.699157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.202 [2024-04-15 02:04:33.699305] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.202 [2024-04-15 02:04:33.699510] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.202 [2024-04-15 02:04:33.699534] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.202 [2024-04-15 02:04:33.699549] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.202 [2024-04-15 02:04:33.701875] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.202 [2024-04-15 02:04:33.710813] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.202 [2024-04-15 02:04:33.711218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.711565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.711594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.202 [2024-04-15 02:04:33.711611] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.202 [2024-04-15 02:04:33.711759] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.202 [2024-04-15 02:04:33.711982] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.202 [2024-04-15 02:04:33.712006] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.202 [2024-04-15 02:04:33.712021] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.202 [2024-04-15 02:04:33.714325] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.202 [2024-04-15 02:04:33.723324] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.202 [2024-04-15 02:04:33.723821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.724100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.724131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.202 [2024-04-15 02:04:33.724149] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.202 [2024-04-15 02:04:33.724315] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.202 [2024-04-15 02:04:33.724467] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.202 [2024-04-15 02:04:33.724493] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.202 [2024-04-15 02:04:33.724509] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.202 [2024-04-15 02:04:33.726867] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.202 [2024-04-15 02:04:33.735912] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.202 [2024-04-15 02:04:33.736343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.736813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.736867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.202 [2024-04-15 02:04:33.736886] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.202 [2024-04-15 02:04:33.737064] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.202 [2024-04-15 02:04:33.737247] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.202 [2024-04-15 02:04:33.737271] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.202 [2024-04-15 02:04:33.737287] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.202 [2024-04-15 02:04:33.739484] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.202 [2024-04-15 02:04:33.748647] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.202 [2024-04-15 02:04:33.749024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.749286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.749317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.202 [2024-04-15 02:04:33.749336] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.202 [2024-04-15 02:04:33.749485] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.202 [2024-04-15 02:04:33.749637] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.202 [2024-04-15 02:04:33.749662] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.202 [2024-04-15 02:04:33.749678] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.202 [2024-04-15 02:04:33.752104] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.202 [2024-04-15 02:04:33.761114] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.202 [2024-04-15 02:04:33.761513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.761816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.761842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.202 [2024-04-15 02:04:33.761872] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.202 [2024-04-15 02:04:33.762024] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.202 [2024-04-15 02:04:33.762223] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.202 [2024-04-15 02:04:33.762248] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.202 [2024-04-15 02:04:33.762264] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.202 [2024-04-15 02:04:33.764732] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.202 [2024-04-15 02:04:33.773894] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.202 [2024-04-15 02:04:33.774266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.774516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.774545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.202 [2024-04-15 02:04:33.774569] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.202 [2024-04-15 02:04:33.774718] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.202 [2024-04-15 02:04:33.774870] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.202 [2024-04-15 02:04:33.774895] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.202 [2024-04-15 02:04:33.774911] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.202 [2024-04-15 02:04:33.777246] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.202 [2024-04-15 02:04:33.786361] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.202 [2024-04-15 02:04:33.786785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.787030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.787072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.202 [2024-04-15 02:04:33.787092] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.202 [2024-04-15 02:04:33.787312] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.202 [2024-04-15 02:04:33.787465] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.202 [2024-04-15 02:04:33.787491] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.202 [2024-04-15 02:04:33.787506] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.202 [2024-04-15 02:04:33.789736] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.202 [2024-04-15 02:04:33.798828] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.202 [2024-04-15 02:04:33.799211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.202 [2024-04-15 02:04:33.799466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.203 [2024-04-15 02:04:33.799496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.203 [2024-04-15 02:04:33.799514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.203 [2024-04-15 02:04:33.799663] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.203 [2024-04-15 02:04:33.799797] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.203 [2024-04-15 02:04:33.799822] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.203 [2024-04-15 02:04:33.799838] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.203 [2024-04-15 02:04:33.802233] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.203 [2024-04-15 02:04:33.811486] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.203 [2024-04-15 02:04:33.811921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.203 [2024-04-15 02:04:33.812195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.203 [2024-04-15 02:04:33.812223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.203 [2024-04-15 02:04:33.812240] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.203 [2024-04-15 02:04:33.812434] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.203 [2024-04-15 02:04:33.812569] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.203 [2024-04-15 02:04:33.812595] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.203 [2024-04-15 02:04:33.812611] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.203 [2024-04-15 02:04:33.814900] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.203 [2024-04-15 02:04:33.823932] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.203 [2024-04-15 02:04:33.824335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.203 [2024-04-15 02:04:33.824662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.203 [2024-04-15 02:04:33.824691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.203 [2024-04-15 02:04:33.824708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.203 [2024-04-15 02:04:33.824874] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.203 [2024-04-15 02:04:33.825007] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.203 [2024-04-15 02:04:33.825031] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.203 [2024-04-15 02:04:33.825059] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.203 [2024-04-15 02:04:33.827438] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.465 [2024-04-15 02:04:33.836530] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.465 [2024-04-15 02:04:33.836983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.465 [2024-04-15 02:04:33.837217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.465 [2024-04-15 02:04:33.837247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.465 [2024-04-15 02:04:33.837265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.465 [2024-04-15 02:04:33.837450] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.465 [2024-04-15 02:04:33.837656] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.465 [2024-04-15 02:04:33.837681] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.465 [2024-04-15 02:04:33.837697] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.465 [2024-04-15 02:04:33.839985] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:48.465 [2024-04-15 02:04:33.848880] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:48.465 [2024-04-15 02:04:33.849305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.465 [2024-04-15 02:04:33.849810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:48.465 [2024-04-15 02:04:33.849859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:48.465 [2024-04-15 02:04:33.849877] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:48.465 [2024-04-15 02:04:33.850007] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:48.465 [2024-04-15 02:04:33.850178] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:48.465 [2024-04-15 02:04:33.850203] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:48.465 [2024-04-15 02:04:33.850219] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:48.465 [2024-04-15 02:04:33.852490] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:48.465 [2024-04-15 02:04:33.861732] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.465 [2024-04-15 02:04:33.862185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.465 [2024-04-15 02:04:33.862621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.465 [2024-04-15 02:04:33.862673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.465 [2024-04-15 02:04:33.862691] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.465 [2024-04-15 02:04:33.862875] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.465 [2024-04-15 02:04:33.863009] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.465 [2024-04-15 02:04:33.863033] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.465 [2024-04-15 02:04:33.863060] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.465 [2024-04-15 02:04:33.865368] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.465 [2024-04-15 02:04:33.874299] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.465 [2024-04-15 02:04:33.874727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.465 [2024-04-15 02:04:33.874976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.465 [2024-04-15 02:04:33.875006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.465 [2024-04-15 02:04:33.875024] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.465 [2024-04-15 02:04:33.875149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.465 [2024-04-15 02:04:33.875337] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.465 [2024-04-15 02:04:33.875361] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.465 [2024-04-15 02:04:33.875377] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.465 [2024-04-15 02:04:33.877592] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.465 [2024-04-15 02:04:33.887015] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.465 [2024-04-15 02:04:33.887430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.465 [2024-04-15 02:04:33.887865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.465 [2024-04-15 02:04:33.887914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.465 [2024-04-15 02:04:33.887932] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.465 [2024-04-15 02:04:33.888095] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.465 [2024-04-15 02:04:33.888248] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.465 [2024-04-15 02:04:33.888278] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.465 [2024-04-15 02:04:33.888294] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.465 [2024-04-15 02:04:33.890544] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.465 [2024-04-15 02:04:33.899676] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.465 [2024-04-15 02:04:33.900125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.465 [2024-04-15 02:04:33.900368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.465 [2024-04-15 02:04:33.900409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.465 [2024-04-15 02:04:33.900425] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.465 [2024-04-15 02:04:33.900557] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.465 [2024-04-15 02:04:33.900692] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.465 [2024-04-15 02:04:33.900716] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.465 [2024-04-15 02:04:33.900732] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.465 [2024-04-15 02:04:33.903088] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.465 [2024-04-15 02:04:33.912350] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.465 [2024-04-15 02:04:33.912784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.465 [2024-04-15 02:04:33.913067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.465 [2024-04-15 02:04:33.913098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.465 [2024-04-15 02:04:33.913116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.465 [2024-04-15 02:04:33.913283] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.465 [2024-04-15 02:04:33.913488] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.465 [2024-04-15 02:04:33.913514] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.465 [2024-04-15 02:04:33.913530] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.465 [2024-04-15 02:04:33.915886] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.465 [2024-04-15 02:04:33.924993] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.465 [2024-04-15 02:04:33.925444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.465 [2024-04-15 02:04:33.925911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:33.925960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.466 [2024-04-15 02:04:33.925978] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.466 [2024-04-15 02:04:33.926172] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.466 [2024-04-15 02:04:33.926361] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.466 [2024-04-15 02:04:33.926385] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.466 [2024-04-15 02:04:33.926406] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.466 [2024-04-15 02:04:33.928691] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.466 [2024-04-15 02:04:33.937558] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.466 [2024-04-15 02:04:33.938156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:33.938430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:33.938457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.466 [2024-04-15 02:04:33.938472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.466 [2024-04-15 02:04:33.938606] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.466 [2024-04-15 02:04:33.938775] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.466 [2024-04-15 02:04:33.938801] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.466 [2024-04-15 02:04:33.938817] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.466 [2024-04-15 02:04:33.941116] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.466 [2024-04-15 02:04:33.950023] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.466 [2024-04-15 02:04:33.950485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:33.950779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:33.950806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.466 [2024-04-15 02:04:33.950835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.466 [2024-04-15 02:04:33.951032] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.466 [2024-04-15 02:04:33.951216] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.466 [2024-04-15 02:04:33.951241] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.466 [2024-04-15 02:04:33.951256] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.466 [2024-04-15 02:04:33.953556] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.466 [2024-04-15 02:04:33.962785] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.466 [2024-04-15 02:04:33.963218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:33.963465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:33.963495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.466 [2024-04-15 02:04:33.963513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.466 [2024-04-15 02:04:33.963661] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.466 [2024-04-15 02:04:33.963833] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.466 [2024-04-15 02:04:33.963858] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.466 [2024-04-15 02:04:33.963873] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.466 [2024-04-15 02:04:33.966188] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.466 [2024-04-15 02:04:33.975403] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.466 [2024-04-15 02:04:33.975801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:33.976098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:33.976125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.466 [2024-04-15 02:04:33.976156] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.466 [2024-04-15 02:04:33.976309] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.466 [2024-04-15 02:04:33.976477] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.466 [2024-04-15 02:04:33.976502] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.466 [2024-04-15 02:04:33.976517] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.466 [2024-04-15 02:04:33.978785] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.466 [2024-04-15 02:04:33.987985] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.466 [2024-04-15 02:04:33.988383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:33.988872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:33.988922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.466 [2024-04-15 02:04:33.988940] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.466 [2024-04-15 02:04:33.989151] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.466 [2024-04-15 02:04:33.989287] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.466 [2024-04-15 02:04:33.989311] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.466 [2024-04-15 02:04:33.989326] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.466 [2024-04-15 02:04:33.991554] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.466 [2024-04-15 02:04:34.000681] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.466 [2024-04-15 02:04:34.001199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:34.001471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:34.001501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.466 [2024-04-15 02:04:34.001519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.466 [2024-04-15 02:04:34.001704] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.466 [2024-04-15 02:04:34.001857] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.466 [2024-04-15 02:04:34.001882] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.466 [2024-04-15 02:04:34.001898] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.466 [2024-04-15 02:04:34.004377] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.466 [2024-04-15 02:04:34.013096] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.466 [2024-04-15 02:04:34.013537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:34.014007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:34.014068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.466 [2024-04-15 02:04:34.014088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.466 [2024-04-15 02:04:34.014236] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.466 [2024-04-15 02:04:34.014442] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.466 [2024-04-15 02:04:34.014466] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.466 [2024-04-15 02:04:34.014481] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.466 [2024-04-15 02:04:34.016801] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.466 [2024-04-15 02:04:34.025581] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.466 [2024-04-15 02:04:34.025999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:34.026294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:34.026334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.466 [2024-04-15 02:04:34.026353] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.466 [2024-04-15 02:04:34.026501] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.466 [2024-04-15 02:04:34.026673] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.466 [2024-04-15 02:04:34.026697] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.466 [2024-04-15 02:04:34.026713] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.466 [2024-04-15 02:04:34.029101] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.466 [2024-04-15 02:04:34.038383] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.466 [2024-04-15 02:04:34.038825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:34.039191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.466 [2024-04-15 02:04:34.039221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.466 [2024-04-15 02:04:34.039239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.466 [2024-04-15 02:04:34.039422] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.466 [2024-04-15 02:04:34.039575] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.467 [2024-04-15 02:04:34.039600] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.467 [2024-04-15 02:04:34.039616] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.467 [2024-04-15 02:04:34.041790] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.467 [2024-04-15 02:04:34.051040] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.467 [2024-04-15 02:04:34.051488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.467 [2024-04-15 02:04:34.051727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.467 [2024-04-15 02:04:34.051767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.467 [2024-04-15 02:04:34.051783] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.467 [2024-04-15 02:04:34.051947] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.467 [2024-04-15 02:04:34.052134] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.467 [2024-04-15 02:04:34.052160] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.467 [2024-04-15 02:04:34.052177] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.467 [2024-04-15 02:04:34.054440] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.467 [2024-04-15 02:04:34.063484] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.467 [2024-04-15 02:04:34.063983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.467 [2024-04-15 02:04:34.064286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.467 [2024-04-15 02:04:34.064312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.467 [2024-04-15 02:04:34.064329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.467 [2024-04-15 02:04:34.064508] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.467 [2024-04-15 02:04:34.064720] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.467 [2024-04-15 02:04:34.064745] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.467 [2024-04-15 02:04:34.064761] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.467 [2024-04-15 02:04:34.066969] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.467 [2024-04-15 02:04:34.076089] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.467 [2024-04-15 02:04:34.076470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.467 [2024-04-15 02:04:34.076814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.467 [2024-04-15 02:04:34.076843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.467 [2024-04-15 02:04:34.076860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.467 [2024-04-15 02:04:34.077044] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.467 [2024-04-15 02:04:34.077226] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.467 [2024-04-15 02:04:34.077250] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.467 [2024-04-15 02:04:34.077266] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.467 [2024-04-15 02:04:34.079495] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.467 [2024-04-15 02:04:34.088659] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.467 [2024-04-15 02:04:34.089096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.467 [2024-04-15 02:04:34.089388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.467 [2024-04-15 02:04:34.089415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.467 [2024-04-15 02:04:34.089431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.467 [2024-04-15 02:04:34.089590] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.467 [2024-04-15 02:04:34.089760] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.467 [2024-04-15 02:04:34.089785] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.467 [2024-04-15 02:04:34.089801] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.467 [2024-04-15 02:04:34.092168] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.467 [2024-04-15 02:04:34.101344] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.467 [2024-04-15 02:04:34.101796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.467 [2024-04-15 02:04:34.102056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.467 [2024-04-15 02:04:34.102087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.467 [2024-04-15 02:04:34.102105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.467 [2024-04-15 02:04:34.102255] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.467 [2024-04-15 02:04:34.102479] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.467 [2024-04-15 02:04:34.102504] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.467 [2024-04-15 02:04:34.102519] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.467 [2024-04-15 02:04:34.104865] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.727 [2024-04-15 02:04:34.113929] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.727 [2024-04-15 02:04:34.114349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.727 [2024-04-15 02:04:34.114864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.727 [2024-04-15 02:04:34.114916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.727 [2024-04-15 02:04:34.114933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.727 [2024-04-15 02:04:34.115146] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.727 [2024-04-15 02:04:34.115354] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.727 [2024-04-15 02:04:34.115379] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.727 [2024-04-15 02:04:34.115394] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.727 [2024-04-15 02:04:34.117695] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.727 [2024-04-15 02:04:34.126356] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.727 [2024-04-15 02:04:34.127010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.727 [2024-04-15 02:04:34.127335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.727 [2024-04-15 02:04:34.127365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.727 [2024-04-15 02:04:34.127389] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.727 [2024-04-15 02:04:34.127573] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.727 [2024-04-15 02:04:34.127726] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.727 [2024-04-15 02:04:34.127751] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.727 [2024-04-15 02:04:34.127767] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.727 [2024-04-15 02:04:34.130064] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.727 [2024-04-15 02:04:34.138698] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.727 [2024-04-15 02:04:34.139147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.727 [2024-04-15 02:04:34.139401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.727 [2024-04-15 02:04:34.139432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.727 [2024-04-15 02:04:34.139450] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.727 [2024-04-15 02:04:34.139543] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.727 [2024-04-15 02:04:34.139732] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.727 [2024-04-15 02:04:34.139757] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.727 [2024-04-15 02:04:34.139773] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.727 [2024-04-15 02:04:34.142070] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.727 [2024-04-15 02:04:34.151330] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.727 [2024-04-15 02:04:34.151834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.727 [2024-04-15 02:04:34.152080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.727 [2024-04-15 02:04:34.152110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.727 [2024-04-15 02:04:34.152128] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.727 [2024-04-15 02:04:34.152293] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.727 [2024-04-15 02:04:34.152464] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.727 [2024-04-15 02:04:34.152489] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.727 [2024-04-15 02:04:34.152505] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.727 [2024-04-15 02:04:34.154754] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.727 [2024-04-15 02:04:34.164156] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.727 [2024-04-15 02:04:34.164569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.727 [2024-04-15 02:04:34.164852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.727 [2024-04-15 02:04:34.164881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.727 [2024-04-15 02:04:34.164898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.727 [2024-04-15 02:04:34.165063] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.727 [2024-04-15 02:04:34.165252] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.727 [2024-04-15 02:04:34.165278] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.727 [2024-04-15 02:04:34.165293] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.727 [2024-04-15 02:04:34.167664] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.727 [2024-04-15 02:04:34.176572] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.727 [2024-04-15 02:04:34.177154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.727 [2024-04-15 02:04:34.177382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.727 [2024-04-15 02:04:34.177411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.727 [2024-04-15 02:04:34.177429] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.727 [2024-04-15 02:04:34.177594] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.727 [2024-04-15 02:04:34.177747] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.727 [2024-04-15 02:04:34.177771] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.727 [2024-04-15 02:04:34.177787] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.727 [2024-04-15 02:04:34.180117] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.727 [2024-04-15 02:04:34.189196] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.727 [2024-04-15 02:04:34.189637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.727 [2024-04-15 02:04:34.189886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.727 [2024-04-15 02:04:34.189915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.727 [2024-04-15 02:04:34.189933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.727 [2024-04-15 02:04:34.190094] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.727 [2024-04-15 02:04:34.190302] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.727 [2024-04-15 02:04:34.190327] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.727 [2024-04-15 02:04:34.190342] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.727 [2024-04-15 02:04:34.192717] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.727 [2024-04-15 02:04:34.201606] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.727 [2024-04-15 02:04:34.202031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.727 [2024-04-15 02:04:34.202325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.727 [2024-04-15 02:04:34.202355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.727 [2024-04-15 02:04:34.202373] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.727 [2024-04-15 02:04:34.202520] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.727 [2024-04-15 02:04:34.202700] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.727 [2024-04-15 02:04:34.202725] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.727 [2024-04-15 02:04:34.202740] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.727 [2024-04-15 02:04:34.205268] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.727 [2024-04-15 02:04:34.214007] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.728 [2024-04-15 02:04:34.214461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.214938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.214990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.728 [2024-04-15 02:04:34.215008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.728 [2024-04-15 02:04:34.215202] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.728 [2024-04-15 02:04:34.215392] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.728 [2024-04-15 02:04:34.215417] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.728 [2024-04-15 02:04:34.215434] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.728 [2024-04-15 02:04:34.217736] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.728 [2024-04-15 02:04:34.226556] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.728 [2024-04-15 02:04:34.226976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.227253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.227284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.728 [2024-04-15 02:04:34.227302] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.728 [2024-04-15 02:04:34.227450] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.728 [2024-04-15 02:04:34.227603] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.728 [2024-04-15 02:04:34.227627] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.728 [2024-04-15 02:04:34.227642] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.728 [2024-04-15 02:04:34.229874] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.728 [2024-04-15 02:04:34.239185] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.728 [2024-04-15 02:04:34.239566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.240091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.240121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.728 [2024-04-15 02:04:34.240139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.728 [2024-04-15 02:04:34.240286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.728 [2024-04-15 02:04:34.240403] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.728 [2024-04-15 02:04:34.240433] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.728 [2024-04-15 02:04:34.240449] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.728 [2024-04-15 02:04:34.242840] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.728 [2024-04-15 02:04:34.251507] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.728 [2024-04-15 02:04:34.252113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.252373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.252402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.728 [2024-04-15 02:04:34.252420] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.728 [2024-04-15 02:04:34.252531] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.728 [2024-04-15 02:04:34.252701] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.728 [2024-04-15 02:04:34.252725] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.728 [2024-04-15 02:04:34.252741] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.728 [2024-04-15 02:04:34.255124] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.728 [2024-04-15 02:04:34.264123] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.728 [2024-04-15 02:04:34.264587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.264878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.264907] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.728 [2024-04-15 02:04:34.264925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.728 [2024-04-15 02:04:34.265121] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.728 [2024-04-15 02:04:34.265291] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.728 [2024-04-15 02:04:34.265316] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.728 [2024-04-15 02:04:34.265332] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.728 [2024-04-15 02:04:34.267939] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.728 [2024-04-15 02:04:34.276627] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.728 [2024-04-15 02:04:34.277182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.277434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.277466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.728 [2024-04-15 02:04:34.277484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.728 [2024-04-15 02:04:34.277615] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.728 [2024-04-15 02:04:34.277768] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.728 [2024-04-15 02:04:34.277792] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.728 [2024-04-15 02:04:34.277813] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.728 [2024-04-15 02:04:34.280251] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.728 [2024-04-15 02:04:34.289431] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.728 [2024-04-15 02:04:34.289836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.290089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.290120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.728 [2024-04-15 02:04:34.290138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.728 [2024-04-15 02:04:34.290358] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.728 [2024-04-15 02:04:34.290475] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.728 [2024-04-15 02:04:34.290499] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.728 [2024-04-15 02:04:34.290515] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.728 [2024-04-15 02:04:34.292752] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.728 [2024-04-15 02:04:34.302029] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.728 [2024-04-15 02:04:34.302492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.302946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.302997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.728 [2024-04-15 02:04:34.303015] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.728 [2024-04-15 02:04:34.303192] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.728 [2024-04-15 02:04:34.303364] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.728 [2024-04-15 02:04:34.303389] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.728 [2024-04-15 02:04:34.303404] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.728 [2024-04-15 02:04:34.305762] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.728 [2024-04-15 02:04:34.314558] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.728 [2024-04-15 02:04:34.314935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.315192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.315220] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.728 [2024-04-15 02:04:34.315236] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.728 [2024-04-15 02:04:34.315408] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.728 [2024-04-15 02:04:34.315620] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.728 [2024-04-15 02:04:34.315644] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.728 [2024-04-15 02:04:34.315660] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.728 [2024-04-15 02:04:34.318040] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.728 [2024-04-15 02:04:34.327022] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.728 [2024-04-15 02:04:34.327463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.327915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.728 [2024-04-15 02:04:34.327963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.728 [2024-04-15 02:04:34.327981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.729 [2024-04-15 02:04:34.328176] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.729 [2024-04-15 02:04:34.328347] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.729 [2024-04-15 02:04:34.328380] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.729 [2024-04-15 02:04:34.328396] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.729 [2024-04-15 02:04:34.330743] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.729 [2024-04-15 02:04:34.339689] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.729 [2024-04-15 02:04:34.340168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.729 [2024-04-15 02:04:34.340451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.729 [2024-04-15 02:04:34.340480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.729 [2024-04-15 02:04:34.340498] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.729 [2024-04-15 02:04:34.340646] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.729 [2024-04-15 02:04:34.340834] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.729 [2024-04-15 02:04:34.340858] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.729 [2024-04-15 02:04:34.340874] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.729 [2024-04-15 02:04:34.343276] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.729 [2024-04-15 02:04:34.352419] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.729 [2024-04-15 02:04:34.352844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.729 [2024-04-15 02:04:34.353079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.729 [2024-04-15 02:04:34.353118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.729 [2024-04-15 02:04:34.353137] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.729 [2024-04-15 02:04:34.353302] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.729 [2024-04-15 02:04:34.353498] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.729 [2024-04-15 02:04:34.353523] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.729 [2024-04-15 02:04:34.353538] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.729 [2024-04-15 02:04:34.355823] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.729 [2024-04-15 02:04:34.365009] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.729 [2024-04-15 02:04:34.365426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.729 [2024-04-15 02:04:34.365752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.729 [2024-04-15 02:04:34.365798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.729 [2024-04-15 02:04:34.365816] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.729 [2024-04-15 02:04:34.365982] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.729 [2024-04-15 02:04:34.366165] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.729 [2024-04-15 02:04:34.366190] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.729 [2024-04-15 02:04:34.366206] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.729 [2024-04-15 02:04:34.368648] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.989 [2024-04-15 02:04:34.377464] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.989 [2024-04-15 02:04:34.377956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.989 [2024-04-15 02:04:34.378207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.989 [2024-04-15 02:04:34.378238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.989 [2024-04-15 02:04:34.378256] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.989 [2024-04-15 02:04:34.378439] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.989 [2024-04-15 02:04:34.378646] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.989 [2024-04-15 02:04:34.378670] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.989 [2024-04-15 02:04:34.378686] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.989 [2024-04-15 02:04:34.380935] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.989 [2024-04-15 02:04:34.389861] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.989 [2024-04-15 02:04:34.390315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.989 [2024-04-15 02:04:34.390567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.989 [2024-04-15 02:04:34.390596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.989 [2024-04-15 02:04:34.390613] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.989 [2024-04-15 02:04:34.390832] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.989 [2024-04-15 02:04:34.390966] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.989 [2024-04-15 02:04:34.390990] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.989 [2024-04-15 02:04:34.391006] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.989 [2024-04-15 02:04:34.393370] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.989 [2024-04-15 02:04:34.402253] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.989 [2024-04-15 02:04:34.402691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.989 [2024-04-15 02:04:34.403121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.989 [2024-04-15 02:04:34.403161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.989 [2024-04-15 02:04:34.403179] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.989 [2024-04-15 02:04:34.403308] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.989 [2024-04-15 02:04:34.403497] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.989 [2024-04-15 02:04:34.403521] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.989 [2024-04-15 02:04:34.403537] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.989 [2024-04-15 02:04:34.405819] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.989 [2024-04-15 02:04:34.414888] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.989 [2024-04-15 02:04:34.415303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.989 [2024-04-15 02:04:34.415817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.989 [2024-04-15 02:04:34.415867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.989 [2024-04-15 02:04:34.415884] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.989 [2024-04-15 02:04:34.416059] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.989 [2024-04-15 02:04:34.416259] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.989 [2024-04-15 02:04:34.416283] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.990 [2024-04-15 02:04:34.416299] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.990 [2024-04-15 02:04:34.418528] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.990 [2024-04-15 02:04:34.427617] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.990 [2024-04-15 02:04:34.428167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.428395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.428426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.990 [2024-04-15 02:04:34.428443] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.990 [2024-04-15 02:04:34.428591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.990 [2024-04-15 02:04:34.428762] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.990 [2024-04-15 02:04:34.428787] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.990 [2024-04-15 02:04:34.428803] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.990 [2024-04-15 02:04:34.431076] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.990 [2024-04-15 02:04:34.439964] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.990 [2024-04-15 02:04:34.440412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.440918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.440969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.990 [2024-04-15 02:04:34.440986] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.990 [2024-04-15 02:04:34.441144] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.990 [2024-04-15 02:04:34.441352] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.990 [2024-04-15 02:04:34.441381] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.990 [2024-04-15 02:04:34.441397] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.990 [2024-04-15 02:04:34.443699] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.990 [2024-04-15 02:04:34.452745] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.990 [2024-04-15 02:04:34.453214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.453471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.453496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.990 [2024-04-15 02:04:34.453511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.990 [2024-04-15 02:04:34.453681] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.990 [2024-04-15 02:04:34.453797] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.990 [2024-04-15 02:04:34.453821] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.990 [2024-04-15 02:04:34.453837] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.990 [2024-04-15 02:04:34.456388] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.990 [2024-04-15 02:04:34.465303] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.990 [2024-04-15 02:04:34.465707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.466166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.466196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.990 [2024-04-15 02:04:34.466213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.990 [2024-04-15 02:04:34.466343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.990 [2024-04-15 02:04:34.466549] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.990 [2024-04-15 02:04:34.466573] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.990 [2024-04-15 02:04:34.466589] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.990 [2024-04-15 02:04:34.469111] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.990 [2024-04-15 02:04:34.477894] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.990 [2024-04-15 02:04:34.478351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.478595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.478623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.990 [2024-04-15 02:04:34.478647] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.990 [2024-04-15 02:04:34.478777] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.990 [2024-04-15 02:04:34.478894] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.990 [2024-04-15 02:04:34.478918] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.990 [2024-04-15 02:04:34.478934] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.990 [2024-04-15 02:04:34.481306] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.990 [2024-04-15 02:04:34.490608] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.990 [2024-04-15 02:04:34.491037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.491324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.491353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.990 [2024-04-15 02:04:34.491371] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.990 [2024-04-15 02:04:34.491519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.990 [2024-04-15 02:04:34.491726] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.990 [2024-04-15 02:04:34.491750] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.990 [2024-04-15 02:04:34.491766] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.990 [2024-04-15 02:04:34.494100] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.990 [2024-04-15 02:04:34.503172] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.990 [2024-04-15 02:04:34.503621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.503948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.503977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.990 [2024-04-15 02:04:34.503994] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.990 [2024-04-15 02:04:34.504198] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.990 [2024-04-15 02:04:34.504387] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.990 [2024-04-15 02:04:34.504412] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.990 [2024-04-15 02:04:34.504427] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.990 [2024-04-15 02:04:34.506979] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.990 [2024-04-15 02:04:34.515781] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.990 [2024-04-15 02:04:34.516268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.516683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.516734] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.990 [2024-04-15 02:04:34.516751] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.990 [2024-04-15 02:04:34.516905] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.990 [2024-04-15 02:04:34.517070] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.990 [2024-04-15 02:04:34.517096] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.990 [2024-04-15 02:04:34.517111] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.990 [2024-04-15 02:04:34.519523] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.990 [2024-04-15 02:04:34.528095] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.990 [2024-04-15 02:04:34.528581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.529085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.529133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.990 [2024-04-15 02:04:34.529153] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.990 [2024-04-15 02:04:34.529282] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.990 [2024-04-15 02:04:34.529416] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.990 [2024-04-15 02:04:34.529441] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.990 [2024-04-15 02:04:34.529456] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.990 [2024-04-15 02:04:34.531794] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.990 [2024-04-15 02:04:34.540842] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.990 [2024-04-15 02:04:34.541279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.990 [2024-04-15 02:04:34.541567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.991 [2024-04-15 02:04:34.541592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.991 [2024-04-15 02:04:34.541622] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.991 [2024-04-15 02:04:34.541799] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.991 [2024-04-15 02:04:34.542016] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.991 [2024-04-15 02:04:34.542041] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.991 [2024-04-15 02:04:34.542080] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.991 [2024-04-15 02:04:34.544487] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.991 [2024-04-15 02:04:34.553457] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.991 [2024-04-15 02:04:34.553845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.991 [2024-04-15 02:04:34.554113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.991 [2024-04-15 02:04:34.554143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.991 [2024-04-15 02:04:34.554161] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.991 [2024-04-15 02:04:34.554345] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.991 [2024-04-15 02:04:34.554522] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.991 [2024-04-15 02:04:34.554547] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.991 [2024-04-15 02:04:34.554562] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.991 [2024-04-15 02:04:34.556865] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.991 [2024-04-15 02:04:34.565870] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.991 [2024-04-15 02:04:34.566274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.991 [2024-04-15 02:04:34.567137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.991 [2024-04-15 02:04:34.567168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.991 [2024-04-15 02:04:34.567185] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.991 [2024-04-15 02:04:34.567354] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.991 [2024-04-15 02:04:34.567545] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.991 [2024-04-15 02:04:34.567570] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.991 [2024-04-15 02:04:34.567586] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.991 [2024-04-15 02:04:34.569549] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.991 [2024-04-15 02:04:34.578654] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.991 [2024-04-15 02:04:34.579130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.991 [2024-04-15 02:04:34.579347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.991 [2024-04-15 02:04:34.579373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.991 [2024-04-15 02:04:34.579390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.991 [2024-04-15 02:04:34.579527] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.991 [2024-04-15 02:04:34.579716] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.991 [2024-04-15 02:04:34.579740] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.991 [2024-04-15 02:04:34.579756] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.991 [2024-04-15 02:04:34.582252] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.991 [2024-04-15 02:04:34.591227] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.991 [2024-04-15 02:04:34.591629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.991 [2024-04-15 02:04:34.591905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.991 [2024-04-15 02:04:34.591934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.991 [2024-04-15 02:04:34.591953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.991 [2024-04-15 02:04:34.592163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.991 [2024-04-15 02:04:34.592353] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.991 [2024-04-15 02:04:34.592385] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.991 [2024-04-15 02:04:34.592409] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.991 [2024-04-15 02:04:34.594666] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.991 [2024-04-15 02:04:34.603793] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.991 [2024-04-15 02:04:34.604234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.991 [2024-04-15 02:04:34.604463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.991 [2024-04-15 02:04:34.604491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.991 [2024-04-15 02:04:34.604507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.991 [2024-04-15 02:04:34.604669] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.991 [2024-04-15 02:04:34.604850] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.991 [2024-04-15 02:04:34.604876] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.991 [2024-04-15 02:04:34.604891] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.991 [2024-04-15 02:04:34.607408] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.991 [2024-04-15 02:04:34.616377] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.991 [2024-04-15 02:04:34.616832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.991 [2024-04-15 02:04:34.617074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.991 [2024-04-15 02:04:34.617101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.991 [2024-04-15 02:04:34.617118] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.991 [2024-04-15 02:04:34.617299] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.991 [2024-04-15 02:04:34.617518] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.991 [2024-04-15 02:04:34.617542] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.991 [2024-04-15 02:04:34.617558] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.991 [2024-04-15 02:04:34.619735] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:48.991 [2024-04-15 02:04:34.628839] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:48.991 [2024-04-15 02:04:34.629261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.991 [2024-04-15 02:04:34.629541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:48.991 [2024-04-15 02:04:34.629573] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:48.991 [2024-04-15 02:04:34.629609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:48.991 [2024-04-15 02:04:34.629776] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:48.991 [2024-04-15 02:04:34.629910] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:48.991 [2024-04-15 02:04:34.629934] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:48.991 [2024-04-15 02:04:34.629955] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:48.991 [2024-04-15 02:04:34.632412] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.251 [2024-04-15 02:04:34.641279] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.251 [2024-04-15 02:04:34.641769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.251 [2024-04-15 02:04:34.642081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.251 [2024-04-15 02:04:34.642111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.251 [2024-04-15 02:04:34.642129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.251 [2024-04-15 02:04:34.642259] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.251 [2024-04-15 02:04:34.642430] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.251 [2024-04-15 02:04:34.642462] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.251 [2024-04-15 02:04:34.642477] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.251 [2024-04-15 02:04:34.644872] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.251 [2024-04-15 02:04:34.653872] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.251 [2024-04-15 02:04:34.654287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.251 [2024-04-15 02:04:34.654611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.251 [2024-04-15 02:04:34.654636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.251 [2024-04-15 02:04:34.654652] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.251 [2024-04-15 02:04:34.654794] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.251 [2024-04-15 02:04:34.654998] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.252 [2024-04-15 02:04:34.655019] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.252 [2024-04-15 02:04:34.655059] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.252 [2024-04-15 02:04:34.657358] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.252 [2024-04-15 02:04:34.666760] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.252 [2024-04-15 02:04:34.667198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.252 [2024-04-15 02:04:34.667512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.252 [2024-04-15 02:04:34.667546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.252 [2024-04-15 02:04:34.667580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.252 [2024-04-15 02:04:34.667745] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.252 [2024-04-15 02:04:34.667901] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.252 [2024-04-15 02:04:34.667921] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.252 [2024-04-15 02:04:34.667934] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.252 [2024-04-15 02:04:34.670202] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.252 [2024-04-15 02:04:34.679280] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.252 [2024-04-15 02:04:34.679767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.252 [2024-04-15 02:04:34.680067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.252 [2024-04-15 02:04:34.680111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.252 [2024-04-15 02:04:34.680127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.252 [2024-04-15 02:04:34.680325] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.252 [2024-04-15 02:04:34.680458] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.252 [2024-04-15 02:04:34.680482] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.252 [2024-04-15 02:04:34.680498] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.252 [2024-04-15 02:04:34.682759] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.252 [2024-04-15 02:04:34.691824] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.252 [2024-04-15 02:04:34.692238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.252 [2024-04-15 02:04:34.692598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.252 [2024-04-15 02:04:34.692645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.252 [2024-04-15 02:04:34.692663] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.252 [2024-04-15 02:04:34.692811] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.252 [2024-04-15 02:04:34.693000] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.252 [2024-04-15 02:04:34.693035] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.252 [2024-04-15 02:04:34.693058] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.252 [2024-04-15 02:04:34.695463] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.252 [2024-04-15 02:04:34.704236] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.252 [2024-04-15 02:04:34.704684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.252 [2024-04-15 02:04:34.705006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.252 [2024-04-15 02:04:34.705061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.252 [2024-04-15 02:04:34.705081] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.252 [2024-04-15 02:04:34.705223] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.252 [2024-04-15 02:04:34.705406] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.252 [2024-04-15 02:04:34.705437] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.252 [2024-04-15 02:04:34.705452] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.252 [2024-04-15 02:04:34.707825] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.252 [2024-04-15 02:04:34.716754] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.252 [2024-04-15 02:04:34.717151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.252 [2024-04-15 02:04:34.717380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.252 [2024-04-15 02:04:34.717410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.252 [2024-04-15 02:04:34.717428] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.252 [2024-04-15 02:04:34.717576] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.252 [2024-04-15 02:04:34.717782] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.252 [2024-04-15 02:04:34.717807] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.252 [2024-04-15 02:04:34.717823] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.252 [2024-04-15 02:04:34.720002] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.252 [2024-04-15 02:04:34.729182] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.252 [2024-04-15 02:04:34.729635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.252 [2024-04-15 02:04:34.729925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.252 [2024-04-15 02:04:34.729970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.252 [2024-04-15 02:04:34.729988] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.252 [2024-04-15 02:04:34.730166] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.252 [2024-04-15 02:04:34.730301] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.252 [2024-04-15 02:04:34.730325] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.252 [2024-04-15 02:04:34.730341] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.252 [2024-04-15 02:04:34.732644] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.252 [2024-04-15 02:04:34.741872] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.252 [2024-04-15 02:04:34.742291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.252 [2024-04-15 02:04:34.742614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.252 [2024-04-15 02:04:34.742667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.252 [2024-04-15 02:04:34.742685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.252 [2024-04-15 02:04:34.742815] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.252 [2024-04-15 02:04:34.742950] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.252 [2024-04-15 02:04:34.742975] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.252 [2024-04-15 02:04:34.742991] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.252 [2024-04-15 02:04:34.745392] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.252 [2024-04-15 02:04:34.754439] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.252 [2024-04-15 02:04:34.754907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.252 [2024-04-15 02:04:34.755155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.252 [2024-04-15 02:04:34.755185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.252 [2024-04-15 02:04:34.755203] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.252 [2024-04-15 02:04:34.755369] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.252 [2024-04-15 02:04:34.755562] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.252 [2024-04-15 02:04:34.755586] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.252 [2024-04-15 02:04:34.755602] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.252 [2024-04-15 02:04:34.757706] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.252 [2024-04-15 02:04:34.767055] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.252 [2024-04-15 02:04:34.767425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.252 [2024-04-15 02:04:34.767705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.252 [2024-04-15 02:04:34.767739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.252 [2024-04-15 02:04:34.767773] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.252 [2024-04-15 02:04:34.767921] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.252 [2024-04-15 02:04:34.768103] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.252 [2024-04-15 02:04:34.768128] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.252 [2024-04-15 02:04:34.768144] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.252 [2024-04-15 02:04:34.770517] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.252 [2024-04-15 02:04:34.779959] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.253 [2024-04-15 02:04:34.780456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.780777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.780824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.253 [2024-04-15 02:04:34.780842] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.253 [2024-04-15 02:04:34.781008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.253 [2024-04-15 02:04:34.781171] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.253 [2024-04-15 02:04:34.781197] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.253 [2024-04-15 02:04:34.781213] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.253 [2024-04-15 02:04:34.783493] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.253 [2024-04-15 02:04:34.792546] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.253 [2024-04-15 02:04:34.792979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.793245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.793277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.253 [2024-04-15 02:04:34.793295] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.253 [2024-04-15 02:04:34.793497] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.253 [2024-04-15 02:04:34.793685] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.253 [2024-04-15 02:04:34.793710] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.253 [2024-04-15 02:04:34.793726] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.253 [2024-04-15 02:04:34.796144] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.253 [2024-04-15 02:04:34.804904] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.253 [2024-04-15 02:04:34.805381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.805656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.805685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.253 [2024-04-15 02:04:34.805703] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.253 [2024-04-15 02:04:34.805887] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.253 [2024-04-15 02:04:34.806004] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.253 [2024-04-15 02:04:34.806028] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.253 [2024-04-15 02:04:34.806044] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.253 [2024-04-15 02:04:34.808474] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.253 [2024-04-15 02:04:34.817599] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.253 [2024-04-15 02:04:34.818054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.818307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.818339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.253 [2024-04-15 02:04:34.818357] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.253 [2024-04-15 02:04:34.818505] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.253 [2024-04-15 02:04:34.818658] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.253 [2024-04-15 02:04:34.818683] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.253 [2024-04-15 02:04:34.818699] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.253 [2024-04-15 02:04:34.820823] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.253 [2024-04-15 02:04:34.830259] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.253 [2024-04-15 02:04:34.830726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.831075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.831101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.253 [2024-04-15 02:04:34.831139] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.253 [2024-04-15 02:04:34.831307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.253 [2024-04-15 02:04:34.831478] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.253 [2024-04-15 02:04:34.831503] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.253 [2024-04-15 02:04:34.831518] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.253 [2024-04-15 02:04:34.833946] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.253 [2024-04-15 02:04:34.842752] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.253 [2024-04-15 02:04:34.843180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.843426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.843455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.253 [2024-04-15 02:04:34.843473] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.253 [2024-04-15 02:04:34.843620] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.253 [2024-04-15 02:04:34.843754] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.253 [2024-04-15 02:04:34.843778] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.253 [2024-04-15 02:04:34.843794] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.253 [2024-04-15 02:04:34.846218] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.253 [2024-04-15 02:04:34.855450] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.253 [2024-04-15 02:04:34.855881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.856098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.856128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.253 [2024-04-15 02:04:34.856146] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.253 [2024-04-15 02:04:34.856347] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.253 [2024-04-15 02:04:34.856546] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.253 [2024-04-15 02:04:34.856571] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.253 [2024-04-15 02:04:34.856586] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.253 [2024-04-15 02:04:34.858959] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.253 [2024-04-15 02:04:34.868029] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.253 [2024-04-15 02:04:34.868452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.868771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.868819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.253 [2024-04-15 02:04:34.868837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.253 [2024-04-15 02:04:34.869008] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.253 [2024-04-15 02:04:34.869171] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.253 [2024-04-15 02:04:34.869197] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.253 [2024-04-15 02:04:34.869212] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.253 [2024-04-15 02:04:34.871443] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.253 [2024-04-15 02:04:34.880370] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.253 [2024-04-15 02:04:34.880791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.881066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.881126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.253 [2024-04-15 02:04:34.881143] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.253 [2024-04-15 02:04:34.881286] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.253 [2024-04-15 02:04:34.881511] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.253 [2024-04-15 02:04:34.881536] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.253 [2024-04-15 02:04:34.881551] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.253 [2024-04-15 02:04:34.884057] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.253 [2024-04-15 02:04:34.892831] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.253 [2024-04-15 02:04:34.893279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.893550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.253 [2024-04-15 02:04:34.893580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.253 [2024-04-15 02:04:34.893598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.254 [2024-04-15 02:04:34.893732] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.254 [2024-04-15 02:04:34.893884] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.254 [2024-04-15 02:04:34.893909] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.254 [2024-04-15 02:04:34.893925] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.254 [2024-04-15 02:04:34.896013] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.515 [2024-04-15 02:04:34.905417] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.515 [2024-04-15 02:04:34.905829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.515 [2024-04-15 02:04:34.906085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.515 [2024-04-15 02:04:34.906116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.515 [2024-04-15 02:04:34.906134] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.515 [2024-04-15 02:04:34.906288] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.515 [2024-04-15 02:04:34.906477] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.515 [2024-04-15 02:04:34.906502] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.515 [2024-04-15 02:04:34.906517] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.515 [2024-04-15 02:04:34.908766] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.515 [2024-04-15 02:04:34.918107] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.515 [2024-04-15 02:04:34.918556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.515 [2024-04-15 02:04:34.918835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.515 [2024-04-15 02:04:34.918882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.515 [2024-04-15 02:04:34.918901] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.515 [2024-04-15 02:04:34.919011] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.515 [2024-04-15 02:04:34.919227] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.515 [2024-04-15 02:04:34.919253] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.515 [2024-04-15 02:04:34.919268] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.515 [2024-04-15 02:04:34.921569] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.515 [2024-04-15 02:04:34.930628] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.515 [2024-04-15 02:04:34.931138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.515 [2024-04-15 02:04:34.931359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.515 [2024-04-15 02:04:34.931390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.515 [2024-04-15 02:04:34.931409] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.515 [2024-04-15 02:04:34.931556] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.515 [2024-04-15 02:04:34.931745] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.515 [2024-04-15 02:04:34.931770] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.515 [2024-04-15 02:04:34.931786] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.515 [2024-04-15 02:04:34.934057] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.515 [2024-04-15 02:04:34.943195] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.515 [2024-04-15 02:04:34.943830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.515 [2024-04-15 02:04:34.944133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.515 [2024-04-15 02:04:34.944164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.515 [2024-04-15 02:04:34.944182] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.515 [2024-04-15 02:04:34.944331] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.515 [2024-04-15 02:04:34.944471] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.515 [2024-04-15 02:04:34.944496] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.515 [2024-04-15 02:04:34.944513] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.515 [2024-04-15 02:04:34.946855] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.515 [2024-04-15 02:04:34.955781] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.515 [2024-04-15 02:04:34.956200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.515 [2024-04-15 02:04:34.956480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.515 [2024-04-15 02:04:34.956511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.515 [2024-04-15 02:04:34.956529] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.515 [2024-04-15 02:04:34.956696] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.515 [2024-04-15 02:04:34.956865] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.515 [2024-04-15 02:04:34.956891] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.515 [2024-04-15 02:04:34.956907] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.515 [2024-04-15 02:04:34.959240] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.515 [2024-04-15 02:04:34.968359] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.515 [2024-04-15 02:04:34.969019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.515 [2024-04-15 02:04:34.969321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.515 [2024-04-15 02:04:34.969351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.515 [2024-04-15 02:04:34.969369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.515 [2024-04-15 02:04:34.969536] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.515 [2024-04-15 02:04:34.969706] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.515 [2024-04-15 02:04:34.969732] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.515 [2024-04-15 02:04:34.969748] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.515 [2024-04-15 02:04:34.972036] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.515 [2024-04-15 02:04:34.980822] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.515 [2024-04-15 02:04:34.981275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.515 [2024-04-15 02:04:34.981609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:34.981655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.516 [2024-04-15 02:04:34.981673] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.516 [2024-04-15 02:04:34.981856] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.516 [2024-04-15 02:04:34.981990] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.516 [2024-04-15 02:04:34.982013] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.516 [2024-04-15 02:04:34.982034] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.516 [2024-04-15 02:04:34.984443] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.516 [2024-04-15 02:04:34.993553] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.516 [2024-04-15 02:04:34.994009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:34.994303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:34.994334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.516 [2024-04-15 02:04:34.994352] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.516 [2024-04-15 02:04:34.994519] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.516 [2024-04-15 02:04:34.994671] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.516 [2024-04-15 02:04:34.994697] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.516 [2024-04-15 02:04:34.994712] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.516 [2024-04-15 02:04:34.996891] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.516 [2024-04-15 02:04:35.006001] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.516 [2024-04-15 02:04:35.006493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:35.006824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:35.006850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.516 [2024-04-15 02:04:35.006866] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.516 [2024-04-15 02:04:35.007104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.516 [2024-04-15 02:04:35.007276] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.516 [2024-04-15 02:04:35.007301] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.516 [2024-04-15 02:04:35.007317] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.516 [2024-04-15 02:04:35.009638] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.516 [2024-04-15 02:04:35.018576] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.516 [2024-04-15 02:04:35.019100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:35.019353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:35.019383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.516 [2024-04-15 02:04:35.019402] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.516 [2024-04-15 02:04:35.019550] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.516 [2024-04-15 02:04:35.019685] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.516 [2024-04-15 02:04:35.019709] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.516 [2024-04-15 02:04:35.019729] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.516 [2024-04-15 02:04:35.021967] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.516 [2024-04-15 02:04:35.031158] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.516 [2024-04-15 02:04:35.031571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:35.032090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:35.032120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.516 [2024-04-15 02:04:35.032138] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.516 [2024-04-15 02:04:35.032303] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.516 [2024-04-15 02:04:35.032528] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.516 [2024-04-15 02:04:35.032551] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.516 [2024-04-15 02:04:35.032567] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.516 [2024-04-15 02:04:35.034781] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.516 [2024-04-15 02:04:35.043869] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.516 [2024-04-15 02:04:35.044324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:35.044724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:35.044771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.516 [2024-04-15 02:04:35.044789] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.516 [2024-04-15 02:04:35.044937] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.516 [2024-04-15 02:04:35.045120] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.516 [2024-04-15 02:04:35.045146] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.516 [2024-04-15 02:04:35.045161] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.516 [2024-04-15 02:04:35.047484] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.516 [2024-04-15 02:04:35.056378] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.516 [2024-04-15 02:04:35.056821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:35.057072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:35.057114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.516 [2024-04-15 02:04:35.057132] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.516 [2024-04-15 02:04:35.057279] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.516 [2024-04-15 02:04:35.057450] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.516 [2024-04-15 02:04:35.057476] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.516 [2024-04-15 02:04:35.057492] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.516 [2024-04-15 02:04:35.059619] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.516 [2024-04-15 02:04:35.068987] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.516 [2024-04-15 02:04:35.069388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:35.069713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:35.069764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.516 [2024-04-15 02:04:35.069782] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.516 [2024-04-15 02:04:35.069930] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.516 [2024-04-15 02:04:35.070131] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.516 [2024-04-15 02:04:35.070157] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.516 [2024-04-15 02:04:35.070172] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.516 [2024-04-15 02:04:35.072423] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.516 [2024-04-15 02:04:35.081597] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.516 [2024-04-15 02:04:35.082076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:35.082330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:35.082361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.516 [2024-04-15 02:04:35.082379] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.516 [2024-04-15 02:04:35.082546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.516 [2024-04-15 02:04:35.082770] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.516 [2024-04-15 02:04:35.082796] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.516 [2024-04-15 02:04:35.082812] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.516 [2024-04-15 02:04:35.085143] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.516 [2024-04-15 02:04:35.094129] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.516 [2024-04-15 02:04:35.094574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:35.094861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.516 [2024-04-15 02:04:35.094893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.516 [2024-04-15 02:04:35.094912] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.516 [2024-04-15 02:04:35.095129] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.517 [2024-04-15 02:04:35.095319] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.517 [2024-04-15 02:04:35.095345] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.517 [2024-04-15 02:04:35.095361] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.517 [2024-04-15 02:04:35.097753] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.517 [2024-04-15 02:04:35.106650] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.517 [2024-04-15 02:04:35.107029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.517 [2024-04-15 02:04:35.107325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.517 [2024-04-15 02:04:35.107352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.517 [2024-04-15 02:04:35.107369] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.517 [2024-04-15 02:04:35.107559] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.517 [2024-04-15 02:04:35.107784] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.517 [2024-04-15 02:04:35.107809] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.517 [2024-04-15 02:04:35.107824] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.517 [2024-04-15 02:04:35.110225] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.517 [2024-04-15 02:04:35.119257] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.517 [2024-04-15 02:04:35.119732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.517 [2024-04-15 02:04:35.119991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.517 [2024-04-15 02:04:35.120020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.517 [2024-04-15 02:04:35.120038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.517 [2024-04-15 02:04:35.120163] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.517 [2024-04-15 02:04:35.120316] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.517 [2024-04-15 02:04:35.120340] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.517 [2024-04-15 02:04:35.120356] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.517 [2024-04-15 02:04:35.122731] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.517 [2024-04-15 02:04:35.131892] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.517 [2024-04-15 02:04:35.132335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.517 [2024-04-15 02:04:35.132650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.517 [2024-04-15 02:04:35.132696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.517 [2024-04-15 02:04:35.132715] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.517 [2024-04-15 02:04:35.132899] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.517 [2024-04-15 02:04:35.133061] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.517 [2024-04-15 02:04:35.133086] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.517 [2024-04-15 02:04:35.133102] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.517 [2024-04-15 02:04:35.135495] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.517 [2024-04-15 02:04:35.144467] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.517 [2024-04-15 02:04:35.145103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.517 [2024-04-15 02:04:35.145388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.517 [2024-04-15 02:04:35.145418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.517 [2024-04-15 02:04:35.145436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.517 [2024-04-15 02:04:35.145602] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.517 [2024-04-15 02:04:35.145809] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.517 [2024-04-15 02:04:35.145833] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.517 [2024-04-15 02:04:35.145849] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.517 [2024-04-15 02:04:35.148144] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.517 [2024-04-15 02:04:35.157039] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.517 [2024-04-15 02:04:35.157498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.517 [2024-04-15 02:04:35.157749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.517 [2024-04-15 02:04:35.157778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.517 [2024-04-15 02:04:35.157796] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.517 [2024-04-15 02:04:35.157981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.517 [2024-04-15 02:04:35.158163] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.517 [2024-04-15 02:04:35.158189] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.517 [2024-04-15 02:04:35.158205] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.517 [2024-04-15 02:04:35.160473] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.780 [2024-04-15 02:04:35.169556] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.780 [2024-04-15 02:04:35.170121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.780 [2024-04-15 02:04:35.170391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.780 [2024-04-15 02:04:35.170419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.780 [2024-04-15 02:04:35.170437] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.780 [2024-04-15 02:04:35.170550] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.780 [2024-04-15 02:04:35.170720] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.780 [2024-04-15 02:04:35.170744] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.780 [2024-04-15 02:04:35.170759] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.780 [2024-04-15 02:04:35.173249] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.780 [2024-04-15 02:04:35.182142] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.780 [2024-04-15 02:04:35.182587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.780 [2024-04-15 02:04:35.183112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.780 [2024-04-15 02:04:35.183141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.780 [2024-04-15 02:04:35.183162] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.780 [2024-04-15 02:04:35.183310] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.780 [2024-04-15 02:04:35.183512] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.780 [2024-04-15 02:04:35.183536] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.780 [2024-04-15 02:04:35.183551] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.780 [2024-04-15 02:04:35.185823] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.780 [2024-04-15 02:04:35.194884] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.780 [2024-04-15 02:04:35.195282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.780 [2024-04-15 02:04:35.195637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.780 [2024-04-15 02:04:35.195691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.780 [2024-04-15 02:04:35.195708] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.780 [2024-04-15 02:04:35.195910] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.780 [2024-04-15 02:04:35.196054] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.780 [2024-04-15 02:04:35.196090] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.780 [2024-04-15 02:04:35.196121] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.780 [2024-04-15 02:04:35.198389] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.780 [2024-04-15 02:04:35.207539] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.780 [2024-04-15 02:04:35.208129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.780 [2024-04-15 02:04:35.208434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.780 [2024-04-15 02:04:35.208460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.780 [2024-04-15 02:04:35.208476] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.780 [2024-04-15 02:04:35.208643] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.780 [2024-04-15 02:04:35.208815] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.780 [2024-04-15 02:04:35.208839] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.780 [2024-04-15 02:04:35.208855] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.780 [2024-04-15 02:04:35.211000] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.780 [2024-04-15 02:04:35.220162] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.780 [2024-04-15 02:04:35.220621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.780 [2024-04-15 02:04:35.220873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.780 [2024-04-15 02:04:35.220901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.780 [2024-04-15 02:04:35.220924] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.780 [2024-04-15 02:04:35.221103] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.780 [2024-04-15 02:04:35.221292] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.780 [2024-04-15 02:04:35.221312] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.780 [2024-04-15 02:04:35.221325] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.780 [2024-04-15 02:04:35.223526] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.780 [2024-04-15 02:04:35.232684] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.780 [2024-04-15 02:04:35.233137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.780 [2024-04-15 02:04:35.233408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.780 [2024-04-15 02:04:35.233438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.780 [2024-04-15 02:04:35.233456] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.780 [2024-04-15 02:04:35.233621] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.780 [2024-04-15 02:04:35.233828] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.781 [2024-04-15 02:04:35.233853] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.781 [2024-04-15 02:04:35.233869] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.781 [2024-04-15 02:04:35.236073] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.781 [2024-04-15 02:04:35.245323] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.781 [2024-04-15 02:04:35.245737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.781 [2024-04-15 02:04:35.245994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.781 [2024-04-15 02:04:35.246021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.781 [2024-04-15 02:04:35.246038] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.781 [2024-04-15 02:04:35.246244] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.781 [2024-04-15 02:04:35.246414] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.781 [2024-04-15 02:04:35.246440] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.781 [2024-04-15 02:04:35.246456] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.781 [2024-04-15 02:04:35.248703] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.781 [2024-04-15 02:04:35.257892] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.781 [2024-04-15 02:04:35.258283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.781 [2024-04-15 02:04:35.258531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.781 [2024-04-15 02:04:35.258562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.781 [2024-04-15 02:04:35.258580] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.781 [2024-04-15 02:04:35.258770] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.781 [2024-04-15 02:04:35.258905] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.781 [2024-04-15 02:04:35.258930] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.781 [2024-04-15 02:04:35.258946] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.781 [2024-04-15 02:04:35.261151] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.781 [2024-04-15 02:04:35.270455] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.781 [2024-04-15 02:04:35.270938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.781 [2024-04-15 02:04:35.271220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.781 [2024-04-15 02:04:35.271252] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.781 [2024-04-15 02:04:35.271270] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.781 [2024-04-15 02:04:35.271456] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.781 [2024-04-15 02:04:35.271626] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.781 [2024-04-15 02:04:35.271651] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.781 [2024-04-15 02:04:35.271667] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.781 [2024-04-15 02:04:35.274138] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.781 [2024-04-15 02:04:35.283025] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.781 [2024-04-15 02:04:35.283523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.781 [2024-04-15 02:04:35.283962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.781 [2024-04-15 02:04:35.284014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.781 [2024-04-15 02:04:35.284032] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.781 [2024-04-15 02:04:35.284205] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.781 [2024-04-15 02:04:35.284358] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.781 [2024-04-15 02:04:35.284381] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.781 [2024-04-15 02:04:35.284397] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.781 [2024-04-15 02:04:35.286769] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.781 [2024-04-15 02:04:35.295673] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.781 [2024-04-15 02:04:35.296128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.781 [2024-04-15 02:04:35.296381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.781 [2024-04-15 02:04:35.296412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.781 [2024-04-15 02:04:35.296430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.781 [2024-04-15 02:04:35.296616] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.781 [2024-04-15 02:04:35.296773] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.781 [2024-04-15 02:04:35.296799] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.781 [2024-04-15 02:04:35.296815] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.781 [2024-04-15 02:04:35.299205] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.781 [2024-04-15 02:04:35.308225] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.781 [2024-04-15 02:04:35.308683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.781 [2024-04-15 02:04:35.309114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.781 [2024-04-15 02:04:35.309145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.781 [2024-04-15 02:04:35.309163] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.781 [2024-04-15 02:04:35.309276] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.781 [2024-04-15 02:04:35.309446] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.781 [2024-04-15 02:04:35.309469] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.781 [2024-04-15 02:04:35.309484] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.781 [2024-04-15 02:04:35.311770] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.781 [2024-04-15 02:04:35.320659] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.781 [2024-04-15 02:04:35.321037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.781 [2024-04-15 02:04:35.321326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.781 [2024-04-15 02:04:35.321356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.781 [2024-04-15 02:04:35.321375] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.781 [2024-04-15 02:04:35.321542] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.781 [2024-04-15 02:04:35.321676] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.781 [2024-04-15 02:04:35.321700] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.781 [2024-04-15 02:04:35.321716] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.781 [2024-04-15 02:04:35.324213] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.781 [2024-04-15 02:04:35.333343] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.781 [2024-04-15 02:04:35.333786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.781 [2024-04-15 02:04:35.334081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.781 [2024-04-15 02:04:35.334109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.781 [2024-04-15 02:04:35.334126] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.781 [2024-04-15 02:04:35.334327] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.781 [2024-04-15 02:04:35.334516] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.781 [2024-04-15 02:04:35.334547] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.781 [2024-04-15 02:04:35.334564] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.781 [2024-04-15 02:04:35.336690] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.781 [2024-04-15 02:04:35.345998] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.781 [2024-04-15 02:04:35.346456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.781 [2024-04-15 02:04:35.346687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.781 [2024-04-15 02:04:35.346714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.781 [2024-04-15 02:04:35.346745] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.781 [2024-04-15 02:04:35.346922] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.781 [2024-04-15 02:04:35.347125] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.781 [2024-04-15 02:04:35.347151] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.781 [2024-04-15 02:04:35.347167] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.781 [2024-04-15 02:04:35.349506] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.781 [2024-04-15 02:04:35.358590] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.782 [2024-04-15 02:04:35.359062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.782 [2024-04-15 02:04:35.359336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.782 [2024-04-15 02:04:35.359366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.782 [2024-04-15 02:04:35.359384] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.782 [2024-04-15 02:04:35.359587] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.782 [2024-04-15 02:04:35.359757] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.782 [2024-04-15 02:04:35.359783] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.782 [2024-04-15 02:04:35.359798] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.782 [2024-04-15 02:04:35.362243] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.782 [2024-04-15 02:04:35.371179] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.782 [2024-04-15 02:04:35.371629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.782 [2024-04-15 02:04:35.371876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.782 [2024-04-15 02:04:35.371905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.782 [2024-04-15 02:04:35.371923] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.782 [2024-04-15 02:04:35.372066] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.782 [2024-04-15 02:04:35.372202] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.782 [2024-04-15 02:04:35.372225] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.782 [2024-04-15 02:04:35.372247] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.782 [2024-04-15 02:04:35.374571] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.782 [2024-04-15 02:04:35.383742] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.782 [2024-04-15 02:04:35.384239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.782 [2024-04-15 02:04:35.384728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.782 [2024-04-15 02:04:35.384779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.782 [2024-04-15 02:04:35.384797] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.782 [2024-04-15 02:04:35.384981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.782 [2024-04-15 02:04:35.385181] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.782 [2024-04-15 02:04:35.385205] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.782 [2024-04-15 02:04:35.385220] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.782 [2024-04-15 02:04:35.387434] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.782 [2024-04-15 02:04:35.396220] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.782 [2024-04-15 02:04:35.396869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.782 [2024-04-15 02:04:35.397215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.782 [2024-04-15 02:04:35.397247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.782 [2024-04-15 02:04:35.397265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.782 [2024-04-15 02:04:35.397433] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.782 [2024-04-15 02:04:35.397673] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.782 [2024-04-15 02:04:35.397698] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.782 [2024-04-15 02:04:35.397713] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.782 [2024-04-15 02:04:35.400021] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.782 [2024-04-15 02:04:35.408791] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.782 [2024-04-15 02:04:35.409235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.782 [2024-04-15 02:04:35.409696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.782 [2024-04-15 02:04:35.409749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.782 [2024-04-15 02:04:35.409767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.782 [2024-04-15 02:04:35.409951] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.782 [2024-04-15 02:04:35.410112] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.782 [2024-04-15 02:04:35.410138] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.782 [2024-04-15 02:04:35.410154] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.782 [2024-04-15 02:04:35.412637] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:49.782 [2024-04-15 02:04:35.421400] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:49.782 [2024-04-15 02:04:35.421838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.782 [2024-04-15 02:04:35.422096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:49.782 [2024-04-15 02:04:35.422124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:49.782 [2024-04-15 02:04:35.422141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:49.782 [2024-04-15 02:04:35.422289] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:49.782 [2024-04-15 02:04:35.422462] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:49.782 [2024-04-15 02:04:35.422487] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:49.782 [2024-04-15 02:04:35.422503] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:49.782 [2024-04-15 02:04:35.424899] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.043 [2024-04-15 02:04:35.434024] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.043 [2024-04-15 02:04:35.434597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.043 [2024-04-15 02:04:35.434954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.043 [2024-04-15 02:04:35.434984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.043 [2024-04-15 02:04:35.435002] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.043 [2024-04-15 02:04:35.435198] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.043 [2024-04-15 02:04:35.435333] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.043 [2024-04-15 02:04:35.435358] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.043 [2024-04-15 02:04:35.435374] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.043 [2024-04-15 02:04:35.437715] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.043 [2024-04-15 02:04:35.446639] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.043 [2024-04-15 02:04:35.447094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.043 [2024-04-15 02:04:35.447373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.043 [2024-04-15 02:04:35.447403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.043 [2024-04-15 02:04:35.447422] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.043 [2024-04-15 02:04:35.447571] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.043 [2024-04-15 02:04:35.447705] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.043 [2024-04-15 02:04:35.447728] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.043 [2024-04-15 02:04:35.447743] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.043 [2024-04-15 02:04:35.450273] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.043 [2024-04-15 02:04:35.459200] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.043 [2024-04-15 02:04:35.459649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.043 [2024-04-15 02:04:35.460125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.043 [2024-04-15 02:04:35.460155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.043 [2024-04-15 02:04:35.460173] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.043 [2024-04-15 02:04:35.460321] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.043 [2024-04-15 02:04:35.460545] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.043 [2024-04-15 02:04:35.460571] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.043 [2024-04-15 02:04:35.460587] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.043 [2024-04-15 02:04:35.462890] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.043 [2024-04-15 02:04:35.471699] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.043 [2024-04-15 02:04:35.472163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.043 [2024-04-15 02:04:35.472384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.043 [2024-04-15 02:04:35.472413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.043 [2024-04-15 02:04:35.472431] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.043 [2024-04-15 02:04:35.472597] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.043 [2024-04-15 02:04:35.472767] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.043 [2024-04-15 02:04:35.472792] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.043 [2024-04-15 02:04:35.472808] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.043 [2024-04-15 02:04:35.475159] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.043 [2024-04-15 02:04:35.484349] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.043 [2024-04-15 02:04:35.484746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.043 [2024-04-15 02:04:35.485076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.043 [2024-04-15 02:04:35.485107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.043 [2024-04-15 02:04:35.485125] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.043 [2024-04-15 02:04:35.485292] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.043 [2024-04-15 02:04:35.485497] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.043 [2024-04-15 02:04:35.485522] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.043 [2024-04-15 02:04:35.485539] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.043 [2024-04-15 02:04:35.487699] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.043 [2024-04-15 02:04:35.497259] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.043 [2024-04-15 02:04:35.497691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.043 [2024-04-15 02:04:35.497934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.043 [2024-04-15 02:04:35.497963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.044 [2024-04-15 02:04:35.497981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.044 [2024-04-15 02:04:35.498126] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.044 [2024-04-15 02:04:35.498261] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.044 [2024-04-15 02:04:35.498285] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.044 [2024-04-15 02:04:35.498301] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.044 [2024-04-15 02:04:35.500642] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.044 [2024-04-15 02:04:35.509752] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.044 [2024-04-15 02:04:35.510155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.044 [2024-04-15 02:04:35.510436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.044 [2024-04-15 02:04:35.510466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.044 [2024-04-15 02:04:35.510484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.044 [2024-04-15 02:04:35.510651] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.044 [2024-04-15 02:04:35.510802] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.044 [2024-04-15 02:04:35.510828] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.044 [2024-04-15 02:04:35.510844] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.044 [2024-04-15 02:04:35.513228] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.044 [2024-04-15 02:04:35.522385] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.044 [2024-04-15 02:04:35.522848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.044 [2024-04-15 02:04:35.523122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.044 [2024-04-15 02:04:35.523153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.044 [2024-04-15 02:04:35.523172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.044 [2024-04-15 02:04:35.523320] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.044 [2024-04-15 02:04:35.523437] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.044 [2024-04-15 02:04:35.523460] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.044 [2024-04-15 02:04:35.523475] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.044 [2024-04-15 02:04:35.525907] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.044 [2024-04-15 02:04:35.534849] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.044 [2024-04-15 02:04:35.535294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.044 [2024-04-15 02:04:35.535650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.044 [2024-04-15 02:04:35.535680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.044 [2024-04-15 02:04:35.535696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.044 [2024-04-15 02:04:35.535882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.044 [2024-04-15 02:04:35.536084] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.044 [2024-04-15 02:04:35.536110] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.044 [2024-04-15 02:04:35.536126] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.044 [2024-04-15 02:04:35.538681] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.044 [2024-04-15 02:04:35.547431] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.044 [2024-04-15 02:04:35.547847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.044 [2024-04-15 02:04:35.548091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.044 [2024-04-15 02:04:35.548122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.044 [2024-04-15 02:04:35.548141] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.044 [2024-04-15 02:04:35.548307] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.044 [2024-04-15 02:04:35.548477] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.044 [2024-04-15 02:04:35.548502] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.044 [2024-04-15 02:04:35.548518] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.044 [2024-04-15 02:04:35.550717] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.044 [2024-04-15 02:04:35.559927] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.044 [2024-04-15 02:04:35.560327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.044 [2024-04-15 02:04:35.560632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.044 [2024-04-15 02:04:35.560688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.044 [2024-04-15 02:04:35.560706] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.044 [2024-04-15 02:04:35.560908] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.044 [2024-04-15 02:04:35.561109] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.044 [2024-04-15 02:04:35.561136] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.044 [2024-04-15 02:04:35.561152] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.044 [2024-04-15 02:04:35.563442] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.044 [2024-04-15 02:04:35.572462] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.044 [2024-04-15 02:04:35.572865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.044 [2024-04-15 02:04:35.573144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.044 [2024-04-15 02:04:35.573172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.044 [2024-04-15 02:04:35.573193] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.044 [2024-04-15 02:04:35.573390] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.044 [2024-04-15 02:04:35.573597] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.044 [2024-04-15 02:04:35.573622] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.044 [2024-04-15 02:04:35.573639] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.044 [2024-04-15 02:04:35.576132] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.044 [2024-04-15 02:04:35.585067] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.044 [2024-04-15 02:04:35.585452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.044 [2024-04-15 02:04:35.585862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.044 [2024-04-15 02:04:35.585916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.044 [2024-04-15 02:04:35.585933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.044 [2024-04-15 02:04:35.586132] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.044 [2024-04-15 02:04:35.586303] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.044 [2024-04-15 02:04:35.586327] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.044 [2024-04-15 02:04:35.586342] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.044 [2024-04-15 02:04:35.588376] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.044 [2024-04-15 02:04:35.597725] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.044 [2024-04-15 02:04:35.598241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.044 [2024-04-15 02:04:35.598468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.044 [2024-04-15 02:04:35.598496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.044 [2024-04-15 02:04:35.598515] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.044 [2024-04-15 02:04:35.598662] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.044 [2024-04-15 02:04:35.598832] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.044 [2024-04-15 02:04:35.598857] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.044 [2024-04-15 02:04:35.598874] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.044 [2024-04-15 02:04:35.601171] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.044 [2024-04-15 02:04:35.610157] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.044 [2024-04-15 02:04:35.610600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.044 [2024-04-15 02:04:35.610955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.044 [2024-04-15 02:04:35.610985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.044 [2024-04-15 02:04:35.611003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.044 [2024-04-15 02:04:35.611209] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.044 [2024-04-15 02:04:35.611398] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.044 [2024-04-15 02:04:35.611423] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.045 [2024-04-15 02:04:35.611438] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.045 [2024-04-15 02:04:35.613976] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.045 [2024-04-15 02:04:35.622643] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.045 [2024-04-15 02:04:35.623095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.045 [2024-04-15 02:04:35.623317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.045 [2024-04-15 02:04:35.623347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.045 [2024-04-15 02:04:35.623364] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.045 [2024-04-15 02:04:35.623549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.045 [2024-04-15 02:04:35.623719] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.045 [2024-04-15 02:04:35.623744] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.045 [2024-04-15 02:04:35.623760] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.045 [2024-04-15 02:04:35.626088] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.045 [2024-04-15 02:04:35.635401] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.045 [2024-04-15 02:04:35.635819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.045 [2024-04-15 02:04:35.636070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.045 [2024-04-15 02:04:35.636101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.045 [2024-04-15 02:04:35.636119] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.045 [2024-04-15 02:04:35.636287] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.045 [2024-04-15 02:04:35.636420] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.045 [2024-04-15 02:04:35.636446] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.045 [2024-04-15 02:04:35.636462] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.045 [2024-04-15 02:04:35.638641] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.045 [2024-04-15 02:04:35.647801] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.045 [2024-04-15 02:04:35.648180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.045 [2024-04-15 02:04:35.648434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.045 [2024-04-15 02:04:35.648467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.045 [2024-04-15 02:04:35.648485] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.045 [2024-04-15 02:04:35.648671] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.045 [2024-04-15 02:04:35.648847] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.045 [2024-04-15 02:04:35.648874] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.045 [2024-04-15 02:04:35.648890] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.045 [2024-04-15 02:04:35.651063] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.045 [2024-04-15 02:04:35.660520] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.045 [2024-04-15 02:04:35.660907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.045 [2024-04-15 02:04:35.661184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.045 [2024-04-15 02:04:35.661215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.045 [2024-04-15 02:04:35.661233] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.045 [2024-04-15 02:04:35.661435] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.045 [2024-04-15 02:04:35.661623] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.045 [2024-04-15 02:04:35.661648] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.045 [2024-04-15 02:04:35.661664] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.045 [2024-04-15 02:04:35.663990] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.045 [2024-04-15 02:04:35.673010] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.045 [2024-04-15 02:04:35.673455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.045 [2024-04-15 02:04:35.673764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.045 [2024-04-15 02:04:35.673806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.045 [2024-04-15 02:04:35.673822] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.045 [2024-04-15 02:04:35.674022] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.045 [2024-04-15 02:04:35.674221] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.045 [2024-04-15 02:04:35.674244] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.045 [2024-04-15 02:04:35.674257] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.045 [2024-04-15 02:04:35.676478] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.045 [2024-04-15 02:04:35.685597] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.045 [2024-04-15 02:04:35.685957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.045 [2024-04-15 02:04:35.686235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.045 [2024-04-15 02:04:35.686262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.045 [2024-04-15 02:04:35.686279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.045 [2024-04-15 02:04:35.686428] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.045 [2024-04-15 02:04:35.686617] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.045 [2024-04-15 02:04:35.686647] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.045 [2024-04-15 02:04:35.686664] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.045 [2024-04-15 02:04:35.688843] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.310 [2024-04-15 02:04:35.698115] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.310 [2024-04-15 02:04:35.698413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.310 [2024-04-15 02:04:35.698720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.310 [2024-04-15 02:04:35.698750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.310 [2024-04-15 02:04:35.698767] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.310 [2024-04-15 02:04:35.698898] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.310 [2024-04-15 02:04:35.699110] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.310 [2024-04-15 02:04:35.699132] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.310 [2024-04-15 02:04:35.699145] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.310 [2024-04-15 02:04:35.701425] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.310 [2024-04-15 02:04:35.710465] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.310 [2024-04-15 02:04:35.710894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.310 [2024-04-15 02:04:35.711162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.310 [2024-04-15 02:04:35.711191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.310 [2024-04-15 02:04:35.711207] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.310 [2024-04-15 02:04:35.711376] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.310 [2024-04-15 02:04:35.711531] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.310 [2024-04-15 02:04:35.711557] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.310 [2024-04-15 02:04:35.711573] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.310 [2024-04-15 02:04:35.713904] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.310 [2024-04-15 02:04:35.723022] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.310 [2024-04-15 02:04:35.723424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.310 [2024-04-15 02:04:35.723725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.310 [2024-04-15 02:04:35.723773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.310 [2024-04-15 02:04:35.723792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.310 [2024-04-15 02:04:35.723957] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.310 [2024-04-15 02:04:35.724153] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.310 [2024-04-15 02:04:35.724176] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.310 [2024-04-15 02:04:35.724199] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.310 [2024-04-15 02:04:35.726661] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.310 [2024-04-15 02:04:35.735655] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.310 [2024-04-15 02:04:35.736093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.310 [2024-04-15 02:04:35.736322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.310 [2024-04-15 02:04:35.736352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.310 [2024-04-15 02:04:35.736370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.310 [2024-04-15 02:04:35.736510] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.310 [2024-04-15 02:04:35.736673] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.310 [2024-04-15 02:04:35.736711] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.310 [2024-04-15 02:04:35.736726] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.310 [2024-04-15 02:04:35.739116] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.310 [2024-04-15 02:04:35.748143] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.310 [2024-04-15 02:04:35.748608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.310 [2024-04-15 02:04:35.748967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.310 [2024-04-15 02:04:35.748996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.310 [2024-04-15 02:04:35.749014] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.310 [2024-04-15 02:04:35.749180] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.310 [2024-04-15 02:04:35.749285] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.310 [2024-04-15 02:04:35.749307] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.310 [2024-04-15 02:04:35.749335] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.310 [2024-04-15 02:04:35.751737] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.311 [2024-04-15 02:04:35.760750] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.311 [2024-04-15 02:04:35.761149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.311 [2024-04-15 02:04:35.761365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.311 [2024-04-15 02:04:35.761394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.311 [2024-04-15 02:04:35.761412] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.311 [2024-04-15 02:04:35.761597] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.311 [2024-04-15 02:04:35.761712] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.311 [2024-04-15 02:04:35.761737] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.311 [2024-04-15 02:04:35.761753] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.311 [2024-04-15 02:04:35.764199] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.311 [2024-04-15 02:04:35.773380] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.311 [2024-04-15 02:04:35.773810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.311 [2024-04-15 02:04:35.774108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.311 [2024-04-15 02:04:35.774136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.311 [2024-04-15 02:04:35.774152] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.311 [2024-04-15 02:04:35.774285] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.311 [2024-04-15 02:04:35.774413] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.311 [2024-04-15 02:04:35.774438] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.311 [2024-04-15 02:04:35.774454] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.311 [2024-04-15 02:04:35.776770] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.311 [2024-04-15 02:04:35.785928] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.311 [2024-04-15 02:04:35.786354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.311 [2024-04-15 02:04:35.786654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.311 [2024-04-15 02:04:35.786707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.311 [2024-04-15 02:04:35.786726] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.311 [2024-04-15 02:04:35.786893] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.311 [2024-04-15 02:04:35.787010] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.311 [2024-04-15 02:04:35.787035] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.311 [2024-04-15 02:04:35.787061] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.311 [2024-04-15 02:04:35.789235] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.311 [2024-04-15 02:04:35.798290] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.311 [2024-04-15 02:04:35.798738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.311 [2024-04-15 02:04:35.799028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.311 [2024-04-15 02:04:35.799085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.311 [2024-04-15 02:04:35.799105] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.311 [2024-04-15 02:04:35.799289] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.311 [2024-04-15 02:04:35.799424] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.311 [2024-04-15 02:04:35.799448] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.311 [2024-04-15 02:04:35.799463] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.311 [2024-04-15 02:04:35.801758] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.311 [2024-04-15 02:04:35.811146] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.311 [2024-04-15 02:04:35.811593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.311 [2024-04-15 02:04:35.811923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.311 [2024-04-15 02:04:35.811950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.311 [2024-04-15 02:04:35.811982] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.311 [2024-04-15 02:04:35.812152] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.311 [2024-04-15 02:04:35.812353] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.311 [2024-04-15 02:04:35.812378] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.311 [2024-04-15 02:04:35.812393] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.311 [2024-04-15 02:04:35.814662] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.311 [2024-04-15 02:04:35.823934] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.311 [2024-04-15 02:04:35.824401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.311 [2024-04-15 02:04:35.824651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.311 [2024-04-15 02:04:35.824681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.311 [2024-04-15 02:04:35.824700] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.311 [2024-04-15 02:04:35.824867] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.311 [2024-04-15 02:04:35.825001] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.311 [2024-04-15 02:04:35.825027] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.311 [2024-04-15 02:04:35.825043] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.311 [2024-04-15 02:04:35.827321] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.311 [2024-04-15 02:04:35.836424] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.311 [2024-04-15 02:04:35.836835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.311 [2024-04-15 02:04:35.837078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.311 [2024-04-15 02:04:35.837117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.311 [2024-04-15 02:04:35.837136] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.311 [2024-04-15 02:04:35.837303] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.311 [2024-04-15 02:04:35.837432] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.311 [2024-04-15 02:04:35.837457] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.311 [2024-04-15 02:04:35.837473] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.311 [2024-04-15 02:04:35.839723] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.311 [2024-04-15 02:04:35.849164] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.311 [2024-04-15 02:04:35.849564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.311 [2024-04-15 02:04:35.849812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.311 [2024-04-15 02:04:35.849842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.311 [2024-04-15 02:04:35.849860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.311 [2024-04-15 02:04:35.850009] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.311 [2024-04-15 02:04:35.850229] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.311 [2024-04-15 02:04:35.850256] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.311 [2024-04-15 02:04:35.850272] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.311 [2024-04-15 02:04:35.852521] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.311 [2024-04-15 02:04:35.861683] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.311 [2024-04-15 02:04:35.862107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.311 [2024-04-15 02:04:35.862358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.311 [2024-04-15 02:04:35.862388] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.311 [2024-04-15 02:04:35.862407] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.311 [2024-04-15 02:04:35.862556] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.311 [2024-04-15 02:04:35.862762] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.311 [2024-04-15 02:04:35.862787] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.311 [2024-04-15 02:04:35.862803] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.311 [2024-04-15 02:04:35.864857] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.311 [2024-04-15 02:04:35.874274] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.311 [2024-04-15 02:04:35.874664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.311 [2024-04-15 02:04:35.875037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.311 [2024-04-15 02:04:35.875098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.311 [2024-04-15 02:04:35.875116] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.312 [2024-04-15 02:04:35.875284] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.312 [2024-04-15 02:04:35.875454] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.312 [2024-04-15 02:04:35.875480] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.312 [2024-04-15 02:04:35.875496] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.312 [2024-04-15 02:04:35.877726] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.312 [2024-04-15 02:04:35.887014] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.312 [2024-04-15 02:04:35.887541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.312 [2024-04-15 02:04:35.887886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.312 [2024-04-15 02:04:35.887938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.312 [2024-04-15 02:04:35.887957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.312 [2024-04-15 02:04:35.888207] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.312 [2024-04-15 02:04:35.888325] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.312 [2024-04-15 02:04:35.888349] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.312 [2024-04-15 02:04:35.888365] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.312 [2024-04-15 02:04:35.890735] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.312 [2024-04-15 02:04:35.899506] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.312 [2024-04-15 02:04:35.899936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.312 [2024-04-15 02:04:35.900182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.312 [2024-04-15 02:04:35.900214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.312 [2024-04-15 02:04:35.900232] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.312 [2024-04-15 02:04:35.900417] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.312 [2024-04-15 02:04:35.900570] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.312 [2024-04-15 02:04:35.900595] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.312 [2024-04-15 02:04:35.900611] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.312 [2024-04-15 02:04:35.902898] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.312 [2024-04-15 02:04:35.912030] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.312 [2024-04-15 02:04:35.912504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.312 [2024-04-15 02:04:35.912822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.312 [2024-04-15 02:04:35.912869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.312 [2024-04-15 02:04:35.912887] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.312 [2024-04-15 02:04:35.913121] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.312 [2024-04-15 02:04:35.913327] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.312 [2024-04-15 02:04:35.913351] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.312 [2024-04-15 02:04:35.913367] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.312 [2024-04-15 02:04:35.915760] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.312 [2024-04-15 02:04:35.924576] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.312 [2024-04-15 02:04:35.925002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.312 [2024-04-15 02:04:35.925292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.312 [2024-04-15 02:04:35.925323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.312 [2024-04-15 02:04:35.925347] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.312 [2024-04-15 02:04:35.925514] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.312 [2024-04-15 02:04:35.925703] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.312 [2024-04-15 02:04:35.925728] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.312 [2024-04-15 02:04:35.925743] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.312 [2024-04-15 02:04:35.927961] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.312 [2024-04-15 02:04:35.937033] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.312 [2024-04-15 02:04:35.937497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.312 [2024-04-15 02:04:35.937721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.312 [2024-04-15 02:04:35.937746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.312 [2024-04-15 02:04:35.937762] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.312 [2024-04-15 02:04:35.937917] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.312 [2024-04-15 02:04:35.938136] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.312 [2024-04-15 02:04:35.938163] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.312 [2024-04-15 02:04:35.938179] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.312 [2024-04-15 02:04:35.940395] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.312 [2024-04-15 02:04:35.949712] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.312 [2024-04-15 02:04:35.950130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.312 [2024-04-15 02:04:35.950393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.312 [2024-04-15 02:04:35.950423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.312 [2024-04-15 02:04:35.950442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.312 [2024-04-15 02:04:35.950591] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.312 [2024-04-15 02:04:35.950725] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.312 [2024-04-15 02:04:35.950750] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.312 [2024-04-15 02:04:35.950765] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.620 [2024-04-15 02:04:35.953058] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.620 [2024-04-15 02:04:35.961927] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.620 [2024-04-15 02:04:35.962384] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.620 [2024-04-15 02:04:35.962631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.620 [2024-04-15 02:04:35.962676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.620 [2024-04-15 02:04:35.962695] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.620 [2024-04-15 02:04:35.962867] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.620 [2024-04-15 02:04:35.962994] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.620 [2024-04-15 02:04:35.963015] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.620 [2024-04-15 02:04:35.963043] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.620 [2024-04-15 02:04:35.965422] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.620 [2024-04-15 02:04:35.974467] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.620 [2024-04-15 02:04:35.974926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.620 [2024-04-15 02:04:35.975227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.620 [2024-04-15 02:04:35.975254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.620 [2024-04-15 02:04:35.975271] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.620 [2024-04-15 02:04:35.975469] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.620 [2024-04-15 02:04:35.975640] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.620 [2024-04-15 02:04:35.975664] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.620 [2024-04-15 02:04:35.975679] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.621 [2024-04-15 02:04:35.978226] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.621 [2024-04-15 02:04:35.986992] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.621 [2024-04-15 02:04:35.987465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:35.987787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:35.987816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.621 [2024-04-15 02:04:35.987833] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.621 [2024-04-15 02:04:35.988035] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.621 [2024-04-15 02:04:35.988190] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.621 [2024-04-15 02:04:35.988211] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.621 [2024-04-15 02:04:35.988223] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.621 [2024-04-15 02:04:35.990516] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.621 [2024-04-15 02:04:35.999653] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.621 [2024-04-15 02:04:36.000140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:36.000407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:36.000454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.621 [2024-04-15 02:04:36.000472] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.621 [2024-04-15 02:04:36.000674] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.621 [2024-04-15 02:04:36.000887] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.621 [2024-04-15 02:04:36.000911] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.621 [2024-04-15 02:04:36.000927] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.621 [2024-04-15 02:04:36.003366] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.621 [2024-04-15 02:04:36.012328] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.621 [2024-04-15 02:04:36.012823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:36.013119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:36.013153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.621 [2024-04-15 02:04:36.013172] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.621 [2024-04-15 02:04:36.013344] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.621 [2024-04-15 02:04:36.013553] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.621 [2024-04-15 02:04:36.013577] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.621 [2024-04-15 02:04:36.013593] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.621 [2024-04-15 02:04:36.015934] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.621 [2024-04-15 02:04:36.024820] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.621 [2024-04-15 02:04:36.025274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:36.025735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:36.025786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.621 [2024-04-15 02:04:36.025803] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.621 [2024-04-15 02:04:36.026005] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.621 [2024-04-15 02:04:36.026225] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.621 [2024-04-15 02:04:36.026250] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.621 [2024-04-15 02:04:36.026266] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.621 [2024-04-15 02:04:36.028515] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.621 [2024-04-15 02:04:36.037357] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.621 [2024-04-15 02:04:36.037773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:36.038036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:36.038070] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.621 [2024-04-15 02:04:36.038087] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.621 [2024-04-15 02:04:36.038199] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.621 [2024-04-15 02:04:36.038330] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.621 [2024-04-15 02:04:36.038360] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.621 [2024-04-15 02:04:36.038377] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.621 [2024-04-15 02:04:36.040611] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.621 [2024-04-15 02:04:36.049849] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.621 [2024-04-15 02:04:36.050285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:36.050589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:36.050618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.621 [2024-04-15 02:04:36.050637] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.621 [2024-04-15 02:04:36.050802] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.621 [2024-04-15 02:04:36.050991] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.621 [2024-04-15 02:04:36.051015] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.621 [2024-04-15 02:04:36.051030] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.621 [2024-04-15 02:04:36.053430] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.621 [2024-04-15 02:04:36.062547] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.621 [2024-04-15 02:04:36.062976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:36.063248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:36.063278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.621 [2024-04-15 02:04:36.063296] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.621 [2024-04-15 02:04:36.063444] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.621 [2024-04-15 02:04:36.063633] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.621 [2024-04-15 02:04:36.063657] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.621 [2024-04-15 02:04:36.063673] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.621 [2024-04-15 02:04:36.066090] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.621 [2024-04-15 02:04:36.075039] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.621 [2024-04-15 02:04:36.075715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:36.076141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:36.076172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.621 [2024-04-15 02:04:36.076190] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.621 [2024-04-15 02:04:36.076374] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.621 [2024-04-15 02:04:36.076582] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.621 [2024-04-15 02:04:36.076606] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.621 [2024-04-15 02:04:36.076628] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.621 [2024-04-15 02:04:36.079037] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.621 [2024-04-15 02:04:36.087581] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.621 [2024-04-15 02:04:36.088055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:36.088353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:36.088379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.621 [2024-04-15 02:04:36.088395] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.621 [2024-04-15 02:04:36.088584] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.621 [2024-04-15 02:04:36.088720] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.621 [2024-04-15 02:04:36.088744] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.621 [2024-04-15 02:04:36.088759] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.621 [2024-04-15 02:04:36.090935] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.621 [2024-04-15 02:04:36.099983] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.621 [2024-04-15 02:04:36.100378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.621 [2024-04-15 02:04:36.100594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.622 [2024-04-15 02:04:36.100623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.622 [2024-04-15 02:04:36.100642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.622 [2024-04-15 02:04:36.100844] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.622 [2024-04-15 02:04:36.101015] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.622 [2024-04-15 02:04:36.101039] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.622 [2024-04-15 02:04:36.101067] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.622 [2024-04-15 02:04:36.103359] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.622 [2024-04-15 02:04:36.112693] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.622 [2024-04-15 02:04:36.113122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.622 [2024-04-15 02:04:36.113400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.622 [2024-04-15 02:04:36.113425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.622 [2024-04-15 02:04:36.113441] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.622 [2024-04-15 02:04:36.113596] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.622 [2024-04-15 02:04:36.113803] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.622 [2024-04-15 02:04:36.113827] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.622 [2024-04-15 02:04:36.113843] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.622 [2024-04-15 02:04:36.116217] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.622 [2024-04-15 02:04:36.125268] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.622 [2024-04-15 02:04:36.125639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.622 [2024-04-15 02:04:36.126110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.622 [2024-04-15 02:04:36.126140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.622 [2024-04-15 02:04:36.126157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.622 [2024-04-15 02:04:36.126358] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.622 [2024-04-15 02:04:36.126530] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.622 [2024-04-15 02:04:36.126554] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.622 [2024-04-15 02:04:36.126569] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.622 [2024-04-15 02:04:36.128924] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.622 [2024-04-15 02:04:36.137811] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.622 [2024-04-15 02:04:36.138271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.622 [2024-04-15 02:04:36.138598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.622 [2024-04-15 02:04:36.138644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.622 [2024-04-15 02:04:36.138662] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.622 [2024-04-15 02:04:36.138847] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.622 [2024-04-15 02:04:36.139035] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.622 [2024-04-15 02:04:36.139078] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.622 [2024-04-15 02:04:36.139095] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.622 [2024-04-15 02:04:36.141577] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.622 [2024-04-15 02:04:36.150462] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.622 [2024-04-15 02:04:36.150876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.622 [2024-04-15 02:04:36.151165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.622 [2024-04-15 02:04:36.151196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.622 [2024-04-15 02:04:36.151214] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.622 [2024-04-15 02:04:36.151362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.622 [2024-04-15 02:04:36.151498] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.622 [2024-04-15 02:04:36.151522] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.622 [2024-04-15 02:04:36.151537] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.622 [2024-04-15 02:04:36.153903] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2283822 Killed "${NVMF_APP[@]}" "$@" 00:29:50.622 02:04:36 -- host/bdevperf.sh@36 -- # tgt_init 00:29:50.622 02:04:36 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:50.622 02:04:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:50.622 02:04:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:29:50.622 02:04:36 -- common/autotest_common.sh@10 -- # set +x 00:29:50.622 02:04:36 -- nvmf/common.sh@469 -- # nvmfpid=2284936 00:29:50.622 02:04:36 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:50.622 02:04:36 -- nvmf/common.sh@470 -- # waitforlisten 2284936 00:29:50.622 02:04:36 -- common/autotest_common.sh@819 -- # '[' -z 2284936 ']' 00:29:50.622 02:04:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:50.622 02:04:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:29:50.622 02:04:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:50.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:50.622 02:04:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:29:50.622 02:04:36 -- common/autotest_common.sh@10 -- # set +x 00:29:50.622 [2024-04-15 02:04:36.163212] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.622 [2024-04-15 02:04:36.163635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.622 [2024-04-15 02:04:36.163922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.622 [2024-04-15 02:04:36.163951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.622 [2024-04-15 02:04:36.163969] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.622 [2024-04-15 02:04:36.164157] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.622 [2024-04-15 02:04:36.164341] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.622 [2024-04-15 02:04:36.164362] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.622 [2024-04-15 02:04:36.164374] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.622 [2024-04-15 02:04:36.166731] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.622 [2024-04-15 02:04:36.175855] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.622 [2024-04-15 02:04:36.176261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.622 [2024-04-15 02:04:36.176694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.622 [2024-04-15 02:04:36.176723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.622 [2024-04-15 02:04:36.176740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.622 [2024-04-15 02:04:36.176959] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.622 [2024-04-15 02:04:36.177131] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.622 [2024-04-15 02:04:36.177155] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.622 [2024-04-15 02:04:36.177170] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.622 [2024-04-15 02:04:36.179452] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
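The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message a few lines up is the harness's waitforlisten step: after killing the old target, the script starts a fresh nvmf_tgt and polls until its RPC socket is accepting before issuing any RPCs. A hedged C sketch of that polling idea (the real helper is a bash function in autotest_common.sh; this only illustrates the concept):

```c
/* Sketch of what a "wait for listen" step does conceptually: poll
 * connect() on the app's AF_UNIX RPC socket until it accepts.
 * Illustrative only; the actual waitforlisten is a bash function. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

static int wait_for_unix_listener(const char *path, int attempts)
{
    struct sockaddr_un addr = {0};
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    for (int i = 0; i < attempts; i++) {
        int fd = socket(AF_UNIX, SOCK_STREAM, 0);
        if (fd < 0) return -1;
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0) {
            close(fd);
            return 0;           /* target is up and listening */
        }
        close(fd);
        usleep(100 * 1000);     /* 100 ms between probes */
    }
    return -1;                  /* timed out */
}

int main(void)
{
    if (wait_for_unix_listener("/var/tmp/spdk.sock", 100) == 0)
        puts("listener ready");
    else
        puts("timed out waiting for listener");
    return 0;
}
```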
00:29:50.622 [2024-04-15 02:04:36.188544] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.622 [2024-04-15 02:04:36.188955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.622 [2024-04-15 02:04:36.189208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.622 [2024-04-15 02:04:36.189251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.622 [2024-04-15 02:04:36.189268] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.622 [2024-04-15 02:04:36.189473] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.622 [2024-04-15 02:04:36.189571] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.622 [2024-04-15 02:04:36.189595] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.622 [2024-04-15 02:04:36.189610] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.622 [2024-04-15 02:04:36.191896] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.622 [2024-04-15 02:04:36.201167] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.622 [2024-04-15 02:04:36.201579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.622 [2024-04-15 02:04:36.201843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.622 [2024-04-15 02:04:36.201874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.622 [2024-04-15 02:04:36.201893] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.623 [2024-04-15 02:04:36.202104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.623 [2024-04-15 02:04:36.202234] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.623 [2024-04-15 02:04:36.202255] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.623 [2024-04-15 02:04:36.202268] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.623 [2024-04-15 02:04:36.204274] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:29:50.623 [2024-04-15 02:04:36.204347] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:50.623 [2024-04-15 02:04:36.204644] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
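The DPDK EAL parameter line above passes -c 0xE, which is binary 1110: cores 1, 2 and 3 enabled, core 0 left out. That is consistent with the "Total cores available: 3" notice and the three "Reactor started on core N" messages further down. A tiny sketch of the mask arithmetic:

```c
/* Sketch: decode the EAL core mask from the log line above.
 * 0xE == binary 1110, i.e. cores 1, 2 and 3 -- matching the
 * "Total cores available: 3" and per-core reactor notices. */
#include <stdio.h>

int main(void)
{
    unsigned long mask = 0xE;
    int count = 0;

    for (int core = 0; core < 64; core++) {
        if (mask & (1UL << core)) {
            printf("core %d enabled\n", core);
            count++;
        }
    }
    printf("total cores: %d\n", count);   /* prints 3 */
    return 0;
}
```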
00:29:50.623 [2024-04-15 02:04:36.213746] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.623 [2024-04-15 02:04:36.214191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.623 [2024-04-15 02:04:36.214445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.623 [2024-04-15 02:04:36.214471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.623 [2024-04-15 02:04:36.214487] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.623 [2024-04-15 02:04:36.214674] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.623 [2024-04-15 02:04:36.214855] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.623 [2024-04-15 02:04:36.214880] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.623 [2024-04-15 02:04:36.214895] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.623 [2024-04-15 02:04:36.217151] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.623 [2024-04-15 02:04:36.226323] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.623 [2024-04-15 02:04:36.226786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.623 [2024-04-15 02:04:36.227106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.623 [2024-04-15 02:04:36.227134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.623 [2024-04-15 02:04:36.227151] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.623 [2024-04-15 02:04:36.227345] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.623 [2024-04-15 02:04:36.227554] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.623 [2024-04-15 02:04:36.227579] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.623 [2024-04-15 02:04:36.227595] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.623 [2024-04-15 02:04:36.229804] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.623 [2024-04-15 02:04:36.239005] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.623 [2024-04-15 02:04:36.239409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.623 [2024-04-15 02:04:36.239680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.623 [2024-04-15 02:04:36.239710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.623 [2024-04-15 02:04:36.239728] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.623 [2024-04-15 02:04:36.239857] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.623 [2024-04-15 02:04:36.240060] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.623 [2024-04-15 02:04:36.240082] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.623 [2024-04-15 02:04:36.240096] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.623 [2024-04-15 02:04:36.242481] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.623 EAL: No free 2048 kB hugepages reported on node 1 00:29:50.623 [2024-04-15 02:04:36.251474] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.623 [2024-04-15 02:04:36.251948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.623 [2024-04-15 02:04:36.252203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.623 [2024-04-15 02:04:36.252230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.623 [2024-04-15 02:04:36.252247] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.623 [2024-04-15 02:04:36.252415] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.623 [2024-04-15 02:04:36.252586] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.623 [2024-04-15 02:04:36.252610] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.623 [2024-04-15 02:04:36.252626] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.623 [2024-04-15 02:04:36.254999] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
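The "No free 2048 kB hugepages reported on node 1" EAL notice above is per-NUMA-node accounting: it only says node 1 has no free 2 MB hugepages, which is typically harmless when node 0 holds the pool the app was started with. A small sketch that reads the same per-node counters (paths assume a Linux host with sysfs hugepage accounting; adjust node numbers for the machine at hand):

```c
/* Sketch: inspect per-NUMA-node free 2 MB hugepages, the accounting
 * behind the EAL notice above. Assumes Linux sysfs hugepage counters. */
#include <stdio.h>

int main(void)
{
    for (int node = 0; node < 2; node++) {
        char path[128];
        snprintf(path, sizeof(path),
                 "/sys/devices/system/node/node%d/hugepages/"
                 "hugepages-2048kB/free_hugepages", node);

        FILE *f = fopen(path, "r");
        if (!f) {
            printf("node %d: no 2048 kB hugepage accounting\n", node);
            continue;
        }
        long free_pages = 0;
        if (fscanf(f, "%ld", &free_pages) == 1)
            printf("node %d: %ld free 2048 kB hugepages\n", node, free_pages);
        fclose(f);
    }
    return 0;
}
```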
00:29:50.623 [2024-04-15 02:04:36.263975] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.623 [2024-04-15 02:04:36.264416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.623 [2024-04-15 02:04:36.264772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.623 [2024-04-15 02:04:36.264798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.623 [2024-04-15 02:04:36.264828] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.623 [2024-04-15 02:04:36.265032] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.623 [2024-04-15 02:04:36.265178] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.623 [2024-04-15 02:04:36.265199] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.623 [2024-04-15 02:04:36.265213] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.883 [2024-04-15 02:04:36.267415] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.883 [2024-04-15 02:04:36.276596] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.883 [2024-04-15 02:04:36.277060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.883 [2024-04-15 02:04:36.277310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.883 [2024-04-15 02:04:36.277336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.883 [2024-04-15 02:04:36.277370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.883 [2024-04-15 02:04:36.277588] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.883 [2024-04-15 02:04:36.277758] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.883 [2024-04-15 02:04:36.277782] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.883 [2024-04-15 02:04:36.277798] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.883 [2024-04-15 02:04:36.280235] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.883 [2024-04-15 02:04:36.280813] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:50.883 [2024-04-15 02:04:36.289196] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.883 [2024-04-15 02:04:36.289798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.883 [2024-04-15 02:04:36.290119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.883 [2024-04-15 02:04:36.290150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.883 [2024-04-15 02:04:36.290170] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.883 [2024-04-15 02:04:36.290318] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.883 [2024-04-15 02:04:36.290506] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.883 [2024-04-15 02:04:36.290527] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.883 [2024-04-15 02:04:36.290542] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.883 [2024-04-15 02:04:36.292900] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.883 [2024-04-15 02:04:36.301705] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.883 [2024-04-15 02:04:36.302233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.883 [2024-04-15 02:04:36.302548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.883 [2024-04-15 02:04:36.302575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.883 [2024-04-15 02:04:36.302592] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.883 [2024-04-15 02:04:36.302746] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.883 [2024-04-15 02:04:36.302935] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.884 [2024-04-15 02:04:36.302955] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.884 [2024-04-15 02:04:36.302970] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.884 [2024-04-15 02:04:36.305703] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.884 [2024-04-15 02:04:36.314601] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.884 [2024-04-15 02:04:36.314976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.884 [2024-04-15 02:04:36.315260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.884 [2024-04-15 02:04:36.315288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.884 [2024-04-15 02:04:36.315304] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.884 [2024-04-15 02:04:36.315466] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.884 [2024-04-15 02:04:36.315617] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.884 [2024-04-15 02:04:36.315638] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.884 [2024-04-15 02:04:36.315651] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.884 [2024-04-15 02:04:36.317968] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.884 [2024-04-15 02:04:36.327123] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.884 [2024-04-15 02:04:36.327582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.884 [2024-04-15 02:04:36.327806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.884 [2024-04-15 02:04:36.327833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.884 [2024-04-15 02:04:36.327849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.884 [2024-04-15 02:04:36.328088] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.884 [2024-04-15 02:04:36.328219] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.884 [2024-04-15 02:04:36.328239] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.884 [2024-04-15 02:04:36.328253] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.884 [2024-04-15 02:04:36.330421] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.884 [2024-04-15 02:04:36.339789] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.884 [2024-04-15 02:04:36.340418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.884 [2024-04-15 02:04:36.340760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.884 [2024-04-15 02:04:36.340816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.884 [2024-04-15 02:04:36.340837] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.884 [2024-04-15 02:04:36.341060] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.884 [2024-04-15 02:04:36.341231] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.884 [2024-04-15 02:04:36.341253] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.884 [2024-04-15 02:04:36.341269] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.884 [2024-04-15 02:04:36.343550] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:50.884 [2024-04-15 02:04:36.352483] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:50.884 [2024-04-15 02:04:36.352947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.884 [2024-04-15 02:04:36.353195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.884 [2024-04-15 02:04:36.353223] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:50.884 [2024-04-15 02:04:36.353239] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:50.884 [2024-04-15 02:04:36.353389] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:50.884 [2024-04-15 02:04:36.353564] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:50.884 [2024-04-15 02:04:36.353585] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:50.884 [2024-04-15 02:04:36.353597] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:50.884 [2024-04-15 02:04:36.355915] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:50.884 [2024-04-15 02:04:36.365119] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.884 [2024-04-15 02:04:36.365541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.884 [2024-04-15 02:04:36.365775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.884 [2024-04-15 02:04:36.365800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.884 [2024-04-15 02:04:36.365830] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.884 [2024-04-15 02:04:36.365986] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.884 [2024-04-15 02:04:36.366185] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.884 [2024-04-15 02:04:36.366206] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.884 [2024-04-15 02:04:36.366220] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.884 [2024-04-15 02:04:36.368526] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.884 [2024-04-15 02:04:36.372265] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:29:50.884 [2024-04-15 02:04:36.372385] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:50.884 [2024-04-15 02:04:36.372402] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:50.884 [2024-04-15 02:04:36.372415] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:50.884 [2024-04-15 02:04:36.372528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:29:50.884 [2024-04-15 02:04:36.372585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:29:50.884 [2024-04-15 02:04:36.372588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:29:50.884 [2024-04-15 02:04:36.377374] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.884 [2024-04-15 02:04:36.377882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.884 [2024-04-15 02:04:36.378114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.884 [2024-04-15 02:04:36.378142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.884 [2024-04-15 02:04:36.378159] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.884 [2024-04-15 02:04:36.378281] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.884 [2024-04-15 02:04:36.378437] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.884 [2024-04-15 02:04:36.378458] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.884 [2024-04-15 02:04:36.378474] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.884 [2024-04-15 02:04:36.380485] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.884 [2024-04-15 02:04:36.389748] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.884 [2024-04-15 02:04:36.390302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.884 [2024-04-15 02:04:36.390584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.884 [2024-04-15 02:04:36.390611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.884 [2024-04-15 02:04:36.390630] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.884 [2024-04-15 02:04:36.390774] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.884 [2024-04-15 02:04:36.390931] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.884 [2024-04-15 02:04:36.390953] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.884 [2024-04-15 02:04:36.390969] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.884 [2024-04-15 02:04:36.392913] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.884 [2024-04-15 02:04:36.402232] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.884 [2024-04-15 02:04:36.402840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.884 [2024-04-15 02:04:36.403148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.884 [2024-04-15 02:04:36.403178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.884 [2024-04-15 02:04:36.403198] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.884 [2024-04-15 02:04:36.403348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.884 [2024-04-15 02:04:36.403524] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.884 [2024-04-15 02:04:36.403545] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.884 [2024-04-15 02:04:36.403562] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.884 [2024-04-15 02:04:36.405801] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.884 [2024-04-15 02:04:36.414740] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.884 [2024-04-15 02:04:36.415304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.884 [2024-04-15 02:04:36.415588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.884 [2024-04-15 02:04:36.415614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.884 [2024-04-15 02:04:36.415632] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.885 [2024-04-15 02:04:36.415791] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.885 [2024-04-15 02:04:36.415978] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.885 [2024-04-15 02:04:36.415999] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.885 [2024-04-15 02:04:36.416014] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.885 [2024-04-15 02:04:36.418442] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.885 [2024-04-15 02:04:36.427123] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.885 [2024-04-15 02:04:36.427675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.885 [2024-04-15 02:04:36.427903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.885 [2024-04-15 02:04:36.427931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.885 [2024-04-15 02:04:36.427950] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.885 [2024-04-15 02:04:36.428124] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.885 [2024-04-15 02:04:36.428250] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.885 [2024-04-15 02:04:36.428272] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.885 [2024-04-15 02:04:36.428287] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.885 [2024-04-15 02:04:36.430422] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.885 [2024-04-15 02:04:36.439505] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.885 [2024-04-15 02:04:36.440001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.885 [2024-04-15 02:04:36.440266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.885 [2024-04-15 02:04:36.440293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.885 [2024-04-15 02:04:36.440312] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.885 [2024-04-15 02:04:36.440507] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.885 [2024-04-15 02:04:36.440645] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.885 [2024-04-15 02:04:36.440666] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.885 [2024-04-15 02:04:36.440681] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.885 [2024-04-15 02:04:36.442773] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.885 [2024-04-15 02:04:36.452111] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.885 [2024-04-15 02:04:36.452608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.885 [2024-04-15 02:04:36.452840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.885 [2024-04-15 02:04:36.452866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.885 [2024-04-15 02:04:36.452883] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.885 [2024-04-15 02:04:36.453021] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.885 [2024-04-15 02:04:36.453186] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.885 [2024-04-15 02:04:36.453208] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.885 [2024-04-15 02:04:36.453223] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.885 [2024-04-15 02:04:36.455281] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.885 [2024-04-15 02:04:36.464452] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.885 [2024-04-15 02:04:36.464857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.885 [2024-04-15 02:04:36.465139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.885 [2024-04-15 02:04:36.465167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.885 [2024-04-15 02:04:36.465184] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.885 [2024-04-15 02:04:36.465334] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.885 [2024-04-15 02:04:36.465518] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.885 [2024-04-15 02:04:36.465539] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.885 [2024-04-15 02:04:36.465552] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.885 [2024-04-15 02:04:36.467781] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.885 [2024-04-15 02:04:36.476713] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.885 [2024-04-15 02:04:36.477115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.885 [2024-04-15 02:04:36.477349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.885 [2024-04-15 02:04:36.477374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.885 [2024-04-15 02:04:36.477390] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.885 [2024-04-15 02:04:36.477569] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.885 [2024-04-15 02:04:36.477702] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.885 [2024-04-15 02:04:36.477722] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.885 [2024-04-15 02:04:36.477736] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.885 [2024-04-15 02:04:36.479630] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.885 [2024-04-15 02:04:36.489073] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.885 [2024-04-15 02:04:36.489475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.885 [2024-04-15 02:04:36.489726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.885 [2024-04-15 02:04:36.489752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.885 [2024-04-15 02:04:36.489768] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.885 [2024-04-15 02:04:36.489916] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.885 [2024-04-15 02:04:36.490107] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.885 [2024-04-15 02:04:36.490129] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.885 [2024-04-15 02:04:36.490142] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.885 [2024-04-15 02:04:36.492118] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.885 [2024-04-15 02:04:36.501272] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.885 [2024-04-15 02:04:36.501647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.885 [2024-04-15 02:04:36.501904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.885 [2024-04-15 02:04:36.501929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.885 [2024-04-15 02:04:36.501945] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.885 [2024-04-15 02:04:36.502124] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.885 [2024-04-15 02:04:36.502289] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.885 [2024-04-15 02:04:36.502310] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.885 [2024-04-15 02:04:36.502323] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.885 [2024-04-15 02:04:36.504373] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.885 [2024-04-15 02:04:36.513623] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.885 [2024-04-15 02:04:36.513999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.885 [2024-04-15 02:04:36.514235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.885 [2024-04-15 02:04:36.514261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.885 [2024-04-15 02:04:36.514277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.885 [2024-04-15 02:04:36.514425] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.885 [2024-04-15 02:04:36.514637] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.885 [2024-04-15 02:04:36.514658] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.885 [2024-04-15 02:04:36.514671] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:50.885 [2024-04-15 02:04:36.516774] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:50.885 [2024-04-15 02:04:36.526247] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:50.885 [2024-04-15 02:04:36.526626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.885 [2024-04-15 02:04:36.526832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.885 [2024-04-15 02:04:36.526862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:50.885 [2024-04-15 02:04:36.526879] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:50.885 [2024-04-15 02:04:36.527027] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:50.885 [2024-04-15 02:04:36.527171] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:50.885 [2024-04-15 02:04:36.527193] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:50.885 [2024-04-15 02:04:36.527207] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.146 [2024-04-15 02:04:36.529413] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.146 [2024-04-15 02:04:36.538455] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.146 [2024-04-15 02:04:36.538841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.146 [2024-04-15 02:04:36.539082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.146 [2024-04-15 02:04:36.539111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.146 [2024-04-15 02:04:36.539127] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.146 [2024-04-15 02:04:36.539277] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.146 [2024-04-15 02:04:36.539443] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.146 [2024-04-15 02:04:36.539463] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.146 [2024-04-15 02:04:36.539476] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.146 [2024-04-15 02:04:36.541433] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.146 [2024-04-15 02:04:36.550653] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.146 [2024-04-15 02:04:36.551081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.146 [2024-04-15 02:04:36.551279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.146 [2024-04-15 02:04:36.551306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.146 [2024-04-15 02:04:36.551322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.146 [2024-04-15 02:04:36.551471] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.146 [2024-04-15 02:04:36.551669] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.146 [2024-04-15 02:04:36.551689] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.146 [2024-04-15 02:04:36.551702] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.146 [2024-04-15 02:04:36.553782] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.146 [2024-04-15 02:04:36.562879] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.147 [2024-04-15 02:04:36.563296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.563494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.563519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.147 [2024-04-15 02:04:36.563540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.147 [2024-04-15 02:04:36.563721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.147 [2024-04-15 02:04:36.563886] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.147 [2024-04-15 02:04:36.563907] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.147 [2024-04-15 02:04:36.563920] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.147 [2024-04-15 02:04:36.566040] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.147 [2024-04-15 02:04:36.575141] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.147 [2024-04-15 02:04:36.575530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.575736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.575764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.147 [2024-04-15 02:04:36.575780] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.147 [2024-04-15 02:04:36.575990] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.147 [2024-04-15 02:04:36.576199] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.147 [2024-04-15 02:04:36.576221] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.147 [2024-04-15 02:04:36.576235] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.147 [2024-04-15 02:04:36.578315] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.147 [2024-04-15 02:04:36.587449] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.147 [2024-04-15 02:04:36.587874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.588080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.588108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.147 [2024-04-15 02:04:36.588124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.147 [2024-04-15 02:04:36.588271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.147 [2024-04-15 02:04:36.588404] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.147 [2024-04-15 02:04:36.588425] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.147 [2024-04-15 02:04:36.588438] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.147 [2024-04-15 02:04:36.590457] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.147 [2024-04-15 02:04:36.599609] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.147 [2024-04-15 02:04:36.600001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.600255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.600281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.147 [2024-04-15 02:04:36.600297] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.147 [2024-04-15 02:04:36.600436] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.147 [2024-04-15 02:04:36.600587] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.147 [2024-04-15 02:04:36.600607] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.147 [2024-04-15 02:04:36.600620] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.147 [2024-04-15 02:04:36.602781] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.147 [2024-04-15 02:04:36.611856] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.147 [2024-04-15 02:04:36.612302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.612521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.612546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.147 [2024-04-15 02:04:36.612562] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.147 [2024-04-15 02:04:36.612710] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.147 [2024-04-15 02:04:36.612875] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.147 [2024-04-15 02:04:36.612896] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.147 [2024-04-15 02:04:36.612910] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.147 [2024-04-15 02:04:36.614867] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.147 [2024-04-15 02:04:36.624021] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.147 [2024-04-15 02:04:36.624451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.624679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.624705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.147 [2024-04-15 02:04:36.624720] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.147 [2024-04-15 02:04:36.624902] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.147 [2024-04-15 02:04:36.625093] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.147 [2024-04-15 02:04:36.625115] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.147 [2024-04-15 02:04:36.625129] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.147 [2024-04-15 02:04:36.627257] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.147 [2024-04-15 02:04:36.636206] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.147 [2024-04-15 02:04:36.636583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.636801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.636826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.147 [2024-04-15 02:04:36.636841] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.147 [2024-04-15 02:04:36.637022] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.147 [2024-04-15 02:04:36.637188] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.147 [2024-04-15 02:04:36.637209] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.147 [2024-04-15 02:04:36.637223] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.147 [2024-04-15 02:04:36.639258] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.147 [2024-04-15 02:04:36.648388] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.147 [2024-04-15 02:04:36.648765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.648994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.649019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.147 [2024-04-15 02:04:36.649034] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.147 [2024-04-15 02:04:36.649173] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.147 [2024-04-15 02:04:36.649374] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.147 [2024-04-15 02:04:36.649394] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.147 [2024-04-15 02:04:36.649408] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.147 [2024-04-15 02:04:36.651570] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.147 [2024-04-15 02:04:36.660613] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.147 [2024-04-15 02:04:36.661043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.661274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.661300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.147 [2024-04-15 02:04:36.661316] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.147 [2024-04-15 02:04:36.661449] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.147 [2024-04-15 02:04:36.661615] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.147 [2024-04-15 02:04:36.661636] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.147 [2024-04-15 02:04:36.661648] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.147 [2024-04-15 02:04:36.663638] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.147 [2024-04-15 02:04:36.672836] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.147 [2024-04-15 02:04:36.673207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.673431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.147 [2024-04-15 02:04:36.673456] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.147 [2024-04-15 02:04:36.673471] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.148 [2024-04-15 02:04:36.673587] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.148 [2024-04-15 02:04:36.673754] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.148 [2024-04-15 02:04:36.673780] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.148 [2024-04-15 02:04:36.673794] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.148 [2024-04-15 02:04:36.675867] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.148 [2024-04-15 02:04:36.685154] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.148 [2024-04-15 02:04:36.685572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.148 [2024-04-15 02:04:36.685808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.148 [2024-04-15 02:04:36.685833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.148 [2024-04-15 02:04:36.685849] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.148 [2024-04-15 02:04:36.685981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.148 [2024-04-15 02:04:36.686158] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.148 [2024-04-15 02:04:36.686179] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.148 [2024-04-15 02:04:36.686193] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.148 [2024-04-15 02:04:36.688272] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.148 [2024-04-15 02:04:36.697635] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.148 [2024-04-15 02:04:36.698052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.148 [2024-04-15 02:04:36.698246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.148 [2024-04-15 02:04:36.698272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.148 [2024-04-15 02:04:36.698288] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.148 [2024-04-15 02:04:36.698437] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.148 [2024-04-15 02:04:36.698571] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.148 [2024-04-15 02:04:36.698592] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.148 [2024-04-15 02:04:36.698605] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.148 [2024-04-15 02:04:36.700635] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.148 [2024-04-15 02:04:36.709861] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.148 [2024-04-15 02:04:36.710240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.148 [2024-04-15 02:04:36.710466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.148 [2024-04-15 02:04:36.710492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.148 [2024-04-15 02:04:36.710507] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.148 [2024-04-15 02:04:36.710687] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.148 [2024-04-15 02:04:36.710835] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.148 [2024-04-15 02:04:36.710855] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.148 [2024-04-15 02:04:36.710873] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.148 [2024-04-15 02:04:36.712911] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.148 [2024-04-15 02:04:36.722131] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.148 [2024-04-15 02:04:36.722504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.148 [2024-04-15 02:04:36.722693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.148 [2024-04-15 02:04:36.722718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.148 [2024-04-15 02:04:36.722734] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.148 [2024-04-15 02:04:36.722898] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.148 [2024-04-15 02:04:36.723072] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.148 [2024-04-15 02:04:36.723094] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.148 [2024-04-15 02:04:36.723107] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.148 [2024-04-15 02:04:36.725171] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.148 [2024-04-15 02:04:36.734488] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.148 [2024-04-15 02:04:36.734888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.148 [2024-04-15 02:04:36.735114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.148 [2024-04-15 02:04:36.735141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.148 [2024-04-15 02:04:36.735157] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.148 [2024-04-15 02:04:36.735290] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.148 [2024-04-15 02:04:36.735456] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.148 [2024-04-15 02:04:36.735476] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.148 [2024-04-15 02:04:36.735489] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.148 [2024-04-15 02:04:36.737572] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.148 [2024-04-15 02:04:36.746761] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.148 [2024-04-15 02:04:36.747159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.148 [2024-04-15 02:04:36.747387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.148 [2024-04-15 02:04:36.747412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.148 [2024-04-15 02:04:36.747427] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.148 [2024-04-15 02:04:36.747543] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.148 [2024-04-15 02:04:36.747691] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.148 [2024-04-15 02:04:36.747711] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.148 [2024-04-15 02:04:36.747724] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.148 [2024-04-15 02:04:36.749681] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.148 [2024-04-15 02:04:36.759081] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.148 [2024-04-15 02:04:36.759494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.148 [2024-04-15 02:04:36.759744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.148 [2024-04-15 02:04:36.759769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.148 [2024-04-15 02:04:36.759784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.148 [2024-04-15 02:04:36.759962] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.148 [2024-04-15 02:04:36.760136] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.148 [2024-04-15 02:04:36.760157] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.148 [2024-04-15 02:04:36.760170] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.148 [2024-04-15 02:04:36.762147] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.148 [2024-04-15 02:04:36.771357] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.148 [2024-04-15 02:04:36.771738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.148 [2024-04-15 02:04:36.771962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.148 [2024-04-15 02:04:36.771987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.148 [2024-04-15 02:04:36.772003] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.148 [2024-04-15 02:04:36.772126] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.148 [2024-04-15 02:04:36.772296] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.148 [2024-04-15 02:04:36.772317] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.148 [2024-04-15 02:04:36.772345] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.148 [2024-04-15 02:04:36.774284] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.148 [2024-04-15 02:04:36.783801] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.148 [2024-04-15 02:04:36.784162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.148 [2024-04-15 02:04:36.784391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.148 [2024-04-15 02:04:36.784417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.148 [2024-04-15 02:04:36.784433] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.148 [2024-04-15 02:04:36.784549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.148 [2024-04-15 02:04:36.784713] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.148 [2024-04-15 02:04:36.784733] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.148 [2024-04-15 02:04:36.784746] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.149 [2024-04-15 02:04:36.786716] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.409 [2024-04-15 02:04:36.796203] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.409 [2024-04-15 02:04:36.796643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.409 [2024-04-15 02:04:36.796879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.409 [2024-04-15 02:04:36.796904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.409 [2024-04-15 02:04:36.796920] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.409 [2024-04-15 02:04:36.797109] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.409 [2024-04-15 02:04:36.797227] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.409 [2024-04-15 02:04:36.797247] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.409 [2024-04-15 02:04:36.797260] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.409 [2024-04-15 02:04:36.799328] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.409 [2024-04-15 02:04:36.808426] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.409 [2024-04-15 02:04:36.808819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.409 [2024-04-15 02:04:36.809071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.409 [2024-04-15 02:04:36.809097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.409 [2024-04-15 02:04:36.809113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.409 [2024-04-15 02:04:36.809274] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.409 [2024-04-15 02:04:36.809407] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.409 [2024-04-15 02:04:36.809427] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.409 [2024-04-15 02:04:36.809440] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.409 [2024-04-15 02:04:36.811565] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.409 [2024-04-15 02:04:36.820917] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.409 [2024-04-15 02:04:36.821276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.409 [2024-04-15 02:04:36.821502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.409 [2024-04-15 02:04:36.821527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.409 [2024-04-15 02:04:36.821542] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.410 [2024-04-15 02:04:36.821706] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.410 [2024-04-15 02:04:36.821854] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.410 [2024-04-15 02:04:36.821874] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.410 [2024-04-15 02:04:36.821888] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.410 [2024-04-15 02:04:36.823927] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.410 [2024-04-15 02:04:36.833220] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.410 [2024-04-15 02:04:36.833573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.410 [2024-04-15 02:04:36.833823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.410 [2024-04-15 02:04:36.833848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.410 [2024-04-15 02:04:36.833864] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.410 [2024-04-15 02:04:36.834010] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.410 [2024-04-15 02:04:36.834172] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.410 [2024-04-15 02:04:36.834195] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.410 [2024-04-15 02:04:36.834208] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.410 [2024-04-15 02:04:36.836246] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.410 [2024-04-15 02:04:36.845539] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.410 [2024-04-15 02:04:36.845933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.410 [2024-04-15 02:04:36.846161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.410 [2024-04-15 02:04:36.846187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.410 [2024-04-15 02:04:36.846202] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.410 [2024-04-15 02:04:36.846334] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.410 [2024-04-15 02:04:36.846532] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.410 [2024-04-15 02:04:36.846553] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.410 [2024-04-15 02:04:36.846566] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.410 [2024-04-15 02:04:36.848633] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.410 [2024-04-15 02:04:36.857812] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.410 [2024-04-15 02:04:36.858227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.410 [2024-04-15 02:04:36.858439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.410 [2024-04-15 02:04:36.858465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.410 [2024-04-15 02:04:36.858480] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.410 [2024-04-15 02:04:36.858645] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.410 [2024-04-15 02:04:36.858811] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.410 [2024-04-15 02:04:36.858832] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.410 [2024-04-15 02:04:36.858845] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.410 [2024-04-15 02:04:36.860839] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.410 [2024-04-15 02:04:36.870222] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.410 [2024-04-15 02:04:36.870578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.410 [2024-04-15 02:04:36.870806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.410 [2024-04-15 02:04:36.870837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.410 [2024-04-15 02:04:36.870853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.410 [2024-04-15 02:04:36.871057] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.410 [2024-04-15 02:04:36.871174] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.410 [2024-04-15 02:04:36.871195] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.410 [2024-04-15 02:04:36.871208] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.410 [2024-04-15 02:04:36.873269] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.410 [2024-04-15 02:04:36.882593] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.410 [2024-04-15 02:04:36.882997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.410 [2024-04-15 02:04:36.883235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.410 [2024-04-15 02:04:36.883262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.410 [2024-04-15 02:04:36.883277] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.410 [2024-04-15 02:04:36.883443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.410 [2024-04-15 02:04:36.883670] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.410 [2024-04-15 02:04:36.883691] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.410 [2024-04-15 02:04:36.883703] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.410 [2024-04-15 02:04:36.885667] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.410 [2024-04-15 02:04:36.894838] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.410 [2024-04-15 02:04:36.895273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.410 [2024-04-15 02:04:36.895465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.410 [2024-04-15 02:04:36.895491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.410 [2024-04-15 02:04:36.895506] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.410 [2024-04-15 02:04:36.895670] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.410 [2024-04-15 02:04:36.895836] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.410 [2024-04-15 02:04:36.895857] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.410 [2024-04-15 02:04:36.895870] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.410 [2024-04-15 02:04:36.897874] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.410 [2024-04-15 02:04:36.907128] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.410 [2024-04-15 02:04:36.907497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.410 [2024-04-15 02:04:36.907695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.410 [2024-04-15 02:04:36.907721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.410 [2024-04-15 02:04:36.907741] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.410 [2024-04-15 02:04:36.907890] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.410 [2024-04-15 02:04:36.908067] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.410 [2024-04-15 02:04:36.908092] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.410 [2024-04-15 02:04:36.908106] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.410 [2024-04-15 02:04:36.910313] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.410 [2024-04-15 02:04:36.919571] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.410 [2024-04-15 02:04:36.920015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.410 [2024-04-15 02:04:36.920217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.410 [2024-04-15 02:04:36.920243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.410 [2024-04-15 02:04:36.920259] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.410 [2024-04-15 02:04:36.920423] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.410 [2024-04-15 02:04:36.920558] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.410 [2024-04-15 02:04:36.920579] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.410 [2024-04-15 02:04:36.920592] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.410 [2024-04-15 02:04:36.922679] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.410 [2024-04-15 02:04:36.931973] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.410 [2024-04-15 02:04:36.932391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.410 [2024-04-15 02:04:36.932608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.410 [2024-04-15 02:04:36.932634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.410 [2024-04-15 02:04:36.932650] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.410 [2024-04-15 02:04:36.932797] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.410 [2024-04-15 02:04:36.932947] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.410 [2024-04-15 02:04:36.932967] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.410 [2024-04-15 02:04:36.932980] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.411 [2024-04-15 02:04:36.935001] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.411 [2024-04-15 02:04:36.944291] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.411 [2024-04-15 02:04:36.944639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.411 [2024-04-15 02:04:36.944898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.411 [2024-04-15 02:04:36.944923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.411 [2024-04-15 02:04:36.944938] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.411 [2024-04-15 02:04:36.945087] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.411 [2024-04-15 02:04:36.945273] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.411 [2024-04-15 02:04:36.945295] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.411 [2024-04-15 02:04:36.945308] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.411 [2024-04-15 02:04:36.947473] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.411 [2024-04-15 02:04:36.956621] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.411 [2024-04-15 02:04:36.956999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.411 [2024-04-15 02:04:36.957236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.411 [2024-04-15 02:04:36.957263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.411 [2024-04-15 02:04:36.957279] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.411 [2024-04-15 02:04:36.957443] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.411 [2024-04-15 02:04:36.957641] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.411 [2024-04-15 02:04:36.957661] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.411 [2024-04-15 02:04:36.957674] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.411 [2024-04-15 02:04:36.959750] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.411 [2024-04-15 02:04:36.968953] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.411 [2024-04-15 02:04:36.969392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.411 [2024-04-15 02:04:36.969615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.411 [2024-04-15 02:04:36.969640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.411 [2024-04-15 02:04:36.969655] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.411 [2024-04-15 02:04:36.969803] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.411 [2024-04-15 02:04:36.969969] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.411 [2024-04-15 02:04:36.969990] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.411 [2024-04-15 02:04:36.970003] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.411 [2024-04-15 02:04:36.972306] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.411 [2024-04-15 02:04:36.981159] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.411 [2024-04-15 02:04:36.981522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.411 [2024-04-15 02:04:36.981716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.411 [2024-04-15 02:04:36.981741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.411 [2024-04-15 02:04:36.981756] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.411 [2024-04-15 02:04:36.981906] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.411 [2024-04-15 02:04:36.982118] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.411 [2024-04-15 02:04:36.982140] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.411 [2024-04-15 02:04:36.982154] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.411 [2024-04-15 02:04:36.984264] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.411 [2024-04-15 02:04:36.993585] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.411 [2024-04-15 02:04:36.993945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.411 [2024-04-15 02:04:36.994162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.411 [2024-04-15 02:04:36.994190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.411 [2024-04-15 02:04:36.994206] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.411 [2024-04-15 02:04:36.994338] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.411 [2024-04-15 02:04:36.994538] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.411 [2024-04-15 02:04:36.994559] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.411 [2024-04-15 02:04:36.994572] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.411 [2024-04-15 02:04:36.996521] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.411 [2024-04-15 02:04:37.005751] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.411 [2024-04-15 02:04:37.006111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.411 [2024-04-15 02:04:37.006339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.411 [2024-04-15 02:04:37.006364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.411 [2024-04-15 02:04:37.006380] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.411 [2024-04-15 02:04:37.006559] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.411 [2024-04-15 02:04:37.006692] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.411 [2024-04-15 02:04:37.006712] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.411 [2024-04-15 02:04:37.006726] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.411 [2024-04-15 02:04:37.008880] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.411 [2024-04-15 02:04:37.017997] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.411 [2024-04-15 02:04:37.018440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.411 [2024-04-15 02:04:37.018677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.411 [2024-04-15 02:04:37.018702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.411 [2024-04-15 02:04:37.018717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.411 [2024-04-15 02:04:37.018882] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.411 [2024-04-15 02:04:37.019057] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.411 [2024-04-15 02:04:37.019084] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.411 [2024-04-15 02:04:37.019099] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.411 [2024-04-15 02:04:37.021416] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.411 [2024-04-15 02:04:37.030246] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.411 [2024-04-15 02:04:37.030678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.411 [2024-04-15 02:04:37.030897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.411 [2024-04-15 02:04:37.030922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.411 [2024-04-15 02:04:37.030937] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.411 [2024-04-15 02:04:37.031079] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.411 [2024-04-15 02:04:37.031215] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.411 [2024-04-15 02:04:37.031236] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.411 [2024-04-15 02:04:37.031249] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.411 [2024-04-15 02:04:37.033373] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.411 [2024-04-15 02:04:37.042559] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.411 [2024-04-15 02:04:37.042995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.411 [2024-04-15 02:04:37.043223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.411 [2024-04-15 02:04:37.043250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.411 [2024-04-15 02:04:37.043265] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.411 [2024-04-15 02:04:37.043462] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.411 [2024-04-15 02:04:37.043641] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.411 [2024-04-15 02:04:37.043662] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.411 [2024-04-15 02:04:37.043676] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.411 [2024-04-15 02:04:37.045682] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.411 [2024-04-15 02:04:37.054901] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.672 [2024-04-15 02:04:37.055259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.672 [2024-04-15 02:04:37.055495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.672 [2024-04-15 02:04:37.055524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.672 [2024-04-15 02:04:37.055540] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.672 [2024-04-15 02:04:37.055721] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.672 [2024-04-15 02:04:37.055825] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.672 [2024-04-15 02:04:37.055846] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.672 [2024-04-15 02:04:37.055865] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.672 [2024-04-15 02:04:37.057940] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.672 [2024-04-15 02:04:37.067188] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.672 [2024-04-15 02:04:37.067568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.672 [2024-04-15 02:04:37.067765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.672 [2024-04-15 02:04:37.067790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.672 [2024-04-15 02:04:37.067806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.672 [2024-04-15 02:04:37.067954] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.672 [2024-04-15 02:04:37.068144] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.672 [2024-04-15 02:04:37.068165] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.672 [2024-04-15 02:04:37.068179] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.672 [2024-04-15 02:04:37.070442] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.672 [2024-04-15 02:04:37.079480] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.672 [2024-04-15 02:04:37.079909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.672 [2024-04-15 02:04:37.080113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.672 [2024-04-15 02:04:37.080139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.672 [2024-04-15 02:04:37.080155] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.672 [2024-04-15 02:04:37.080271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.672 [2024-04-15 02:04:37.080421] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.672 [2024-04-15 02:04:37.080442] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.672 [2024-04-15 02:04:37.080455] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.672 [2024-04-15 02:04:37.082664] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.672 [2024-04-15 02:04:37.091657] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.672 [2024-04-15 02:04:37.092069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.672 [2024-04-15 02:04:37.092316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.672 [2024-04-15 02:04:37.092342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.672 [2024-04-15 02:04:37.092358] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.672 [2024-04-15 02:04:37.092508] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.672 [2024-04-15 02:04:37.092642] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.672 [2024-04-15 02:04:37.092663] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.672 [2024-04-15 02:04:37.092677] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.672 [2024-04-15 02:04:37.094869] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.673 [2024-04-15 02:04:37.104068] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.673 [2024-04-15 02:04:37.104476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-04-15 02:04:37.104699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-04-15 02:04:37.104725] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.673 [2024-04-15 02:04:37.104740] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.673 [2024-04-15 02:04:37.104889] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.673 [2024-04-15 02:04:37.105065] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.673 [2024-04-15 02:04:37.105087] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.673 [2024-04-15 02:04:37.105100] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.673 [2024-04-15 02:04:37.107174] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.673 [2024-04-15 02:04:37.116424] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.673 [2024-04-15 02:04:37.116775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-04-15 02:04:37.116986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-04-15 02:04:37.117020] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.673 [2024-04-15 02:04:37.117036] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.673 [2024-04-15 02:04:37.117192] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.673 [2024-04-15 02:04:37.117330] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.673 [2024-04-15 02:04:37.117351] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.673 [2024-04-15 02:04:37.117378] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.673 [2024-04-15 02:04:37.119430] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:51.673 [2024-04-15 02:04:37.128708] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:51.673 [2024-04-15 02:04:37.129076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-04-15 02:04:37.129311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-04-15 02:04:37.129337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420 00:29:51.673 [2024-04-15 02:04:37.129352] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set 00:29:51.673 [2024-04-15 02:04:37.129485] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor 00:29:51.673 [2024-04-15 02:04:37.129622] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:51.673 [2024-04-15 02:04:37.129643] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:51.673 [2024-04-15 02:04:37.129657] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:51.673 [2024-04-15 02:04:37.131911] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:51.673 02:04:37 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:29:51.673 02:04:37 -- common/autotest_common.sh@852 -- # return 0
00:29:51.673 02:04:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:29:51.673 02:04:37 -- common/autotest_common.sh@718 -- # xtrace_disable
00:29:51.673 [2024-04-15 02:04:37.140948] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.673 02:04:37 -- common/autotest_common.sh@10 -- # set +x
00:29:51.673 [2024-04-15 02:04:37.141362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.673 [2024-04-15 02:04:37.141557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.673 [2024-04-15 02:04:37.141582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.673 [2024-04-15 02:04:37.141598] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.673 [2024-04-15 02:04:37.141745] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.673 [2024-04-15 02:04:37.141894] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.673 [2024-04-15 02:04:37.141915] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.673 [2024-04-15 02:04:37.141928] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.673 [2024-04-15 02:04:37.144078] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.673 [2024-04-15 02:04:37.153529] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.673 [2024-04-15 02:04:37.153931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.673 [2024-04-15 02:04:37.154159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.673 [2024-04-15 02:04:37.154186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.673 [2024-04-15 02:04:37.154201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.673 [2024-04-15 02:04:37.154381] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.673 [2024-04-15 02:04:37.154530] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.673 [2024-04-15 02:04:37.154552] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.673 [2024-04-15 02:04:37.154566] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.673 [2024-04-15 02:04:37.156613] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.673 02:04:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:51.673 02:04:37 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:29:51.673 02:04:37 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:51.673 02:04:37 -- common/autotest_common.sh@10 -- # set +x
00:29:51.673 [2024-04-15 02:04:37.165766] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.673 [2024-04-15 02:04:37.166147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.673 [2024-04-15 02:04:37.166375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.673 [2024-04-15 02:04:37.166400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.673 [2024-04-15 02:04:37.166416] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.673 [2024-04-15 02:04:37.166564] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.673 [2024-04-15 02:04:37.166720] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.673 [2024-04-15 02:04:37.166741] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.673 [2024-04-15 02:04:37.166755] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.673 [2024-04-15 02:04:37.168135] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:51.673 [2024-04-15 02:04:37.168889] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.673 02:04:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:51.673 02:04:37 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:29:51.673 02:04:37 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:51.673 02:04:37 -- common/autotest_common.sh@10 -- # set +x
00:29:51.673 [2024-04-15 02:04:37.178153] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.673 [2024-04-15 02:04:37.178560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.673 [2024-04-15 02:04:37.178818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.673 [2024-04-15 02:04:37.178844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.673 [2024-04-15 02:04:37.178859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.673 [2024-04-15 02:04:37.179006] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.673 [2024-04-15 02:04:37.179197] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.673 [2024-04-15 02:04:37.179220] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.673 [2024-04-15 02:04:37.179233] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.673 [2024-04-15 02:04:37.181259] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.673 [2024-04-15 02:04:37.190367] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.673 [2024-04-15 02:04:37.190839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.673 [2024-04-15 02:04:37.191067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.673 [2024-04-15 02:04:37.191104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.673 [2024-04-15 02:04:37.191121] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.673 [2024-04-15 02:04:37.191304] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.673 [2024-04-15 02:04:37.191423] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.673 [2024-04-15 02:04:37.191444] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.673 [2024-04-15 02:04:37.191458] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.673 [2024-04-15 02:04:37.193668] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.673 [2024-04-15 02:04:37.202881] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.673 [2024-04-15 02:04:37.203540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.673 [2024-04-15 02:04:37.203803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.674 [2024-04-15 02:04:37.203833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.674 [2024-04-15 02:04:37.203853] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.674 [2024-04-15 02:04:37.204029] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.674 [2024-04-15 02:04:37.204208] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.674 [2024-04-15 02:04:37.204231] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.674 [2024-04-15 02:04:37.204248] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.674 Malloc0
[2024-04-15 02:04:37.206299] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.674 02:04:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:51.674 02:04:37 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:29:51.674 02:04:37 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:51.674 02:04:37 -- common/autotest_common.sh@10 -- # set +x
00:29:51.674 02:04:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:51.674 02:04:37 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:29:51.674 02:04:37 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:51.674 02:04:37 -- common/autotest_common.sh@10 -- # set +x
00:29:51.674 [2024-04-15 02:04:37.215311] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.674 [2024-04-15 02:04:37.215698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.674 [2024-04-15 02:04:37.215939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.674 [2024-04-15 02:04:37.215965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e30030 with addr=10.0.0.2, port=4420
00:29:51.674 [2024-04-15 02:04:37.215981] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e30030 is same with the state(5) to be set
00:29:51.674 [2024-04-15 02:04:37.216155] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e30030 (9): Bad file descriptor
00:29:51.674 [2024-04-15 02:04:37.216309] nvme_ctrlr.c:4027:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:51.674 [2024-04-15 02:04:37.216345] nvme_ctrlr.c:1736:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:51.674 [2024-04-15 02:04:37.216359] nvme_ctrlr.c:1028:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:51.674 [2024-04-15 02:04:37.218570] bdev_nvme.c:2024:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:51.674 02:04:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:51.674 02:04:37 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:29:51.674 02:04:37 -- common/autotest_common.sh@551 -- # xtrace_disable
00:29:51.674 02:04:37 -- common/autotest_common.sh@10 -- # set +x
00:29:51.674 [2024-04-15 02:04:37.226131] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:51.674 [2024-04-15 02:04:37.227738] nvme_ctrlr.c:1638:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:51.674 02:04:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:29:51.674 02:04:37 -- host/bdevperf.sh@38 -- # wait 2284246
00:29:51.674 [2024-04-15 02:04:37.300841] bdev_nvme.c:2026:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
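The rpc_cmd calls above drive the target over its RPC socket; stripped of the test harness, the same bring-up sequence corresponds to direct invocations of SPDK's rpc.py with the arguments logged above (a sketch, with the script path assumed relative to the SPDK repo root):

  # equivalent standalone target bring-up, mirroring the logged rpc_cmd arguments
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Only after the add_listener step does the "NVMe/TCP Target Listening" notice appear, which is why every reset attempt before that point fails with errno 111 and the first one after it logs "Resetting controller successful."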
00:30:01.647
00:30:01.647                                                 Latency(us)
00:30:01.647 Device Information          : runtime(s)     IOPS      MiB/s    Fail/s     TO/s   Average      min      max
00:30:01.647 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:30:01.647 Verification LBA range: start 0x0 length 0x4000
00:30:01.647 Nvme1n1                     :      15.01  9720.64      37.97  15442.30     0.00   5072.24   813.13  20680.25
00:30:01.647 ===================================================================================================================
00:30:01.647 Total                       :             9720.64      37.97  15442.30     0.00   5072.24   813.13  20680.25
00:30:01.647 02:04:45 -- host/bdevperf.sh@39 -- # sync
00:30:01.648 02:04:45 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:30:01.648 02:04:45 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:01.648 02:04:45 -- common/autotest_common.sh@10 -- # set +x
00:30:01.648 02:04:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:01.648 02:04:45 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:30:01.648 02:04:45 -- host/bdevperf.sh@44 -- # nvmftestfini
00:30:01.648 02:04:45 -- nvmf/common.sh@476 -- # nvmfcleanup
00:30:01.648 02:04:45 -- nvmf/common.sh@116 -- # sync
00:30:01.648 02:04:45 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:30:01.648 02:04:45 -- nvmf/common.sh@119 -- # set +e
00:30:01.648 02:04:45 -- nvmf/common.sh@120 -- # for i in {1..20}
00:30:01.648 02:04:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:30:01.648 rmmod nvme_tcp
00:30:01.648 rmmod nvme_fabrics
00:30:01.648 rmmod nvme_keyring
00:30:01.648 02:04:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:30:01.648 02:04:45 -- nvmf/common.sh@123 -- # set -e
00:30:01.648 02:04:45 -- nvmf/common.sh@124 -- # return 0
00:30:01.648 02:04:45 -- nvmf/common.sh@477 -- # '[' -n 2284936 ']'
00:30:01.648 02:04:45 -- nvmf/common.sh@478 -- # killprocess 2284936
00:30:01.648 02:04:45 -- common/autotest_common.sh@926 -- # '[' -z 2284936 ']'
00:30:01.648 02:04:45 -- common/autotest_common.sh@930 -- # kill -0 2284936
00:30:01.648 02:04:45 -- common/autotest_common.sh@931 -- # uname
00:30:01.648 02:04:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:30:01.648 02:04:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2284936
00:30:01.648 02:04:45 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:30:01.648 02:04:45 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:30:01.648 02:04:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2284936'
00:30:01.648 killing process with pid 2284936
00:30:01.648 02:04:45 -- common/autotest_common.sh@945 -- # kill 2284936
00:30:01.648 02:04:45 -- common/autotest_common.sh@950 -- # wait 2284936
00:30:01.648 02:04:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:30:01.648 02:04:46 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:30:01.648 02:04:46 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:30:01.648 02:04:46 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:01.648 02:04:46 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:30:01.648 02:04:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:01.648 02:04:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:01.648 02:04:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:02.580 02:04:48 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1
00:30:02.580
00:30:02.580 real 0m22.869s
00:30:02.580 user 1m1.591s
00:30:02.580 sys 0m4.517s
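A quick consistency check on the bdevperf summary above: with the job's 4096-byte I/O size, the IOPS and MiB/s columns should agree (the one-liner below is illustrative and not part of the log):

  # 9720.64 IOPS x 4096 B per I/O, converted to MiB/s
  awk 'BEGIN { printf "%.2f\n", 9720.64 * 4096 / (1024 * 1024) }'   # prints 37.97

which matches the reported 37.97 MiB/s for Nvme1n1 over the 15.01 s run.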
00:30:02.580 02:04:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:02.580 02:04:48 -- common/autotest_common.sh@10 -- # set +x
00:30:02.580 ************************************
00:30:02.580 END TEST nvmf_bdevperf
00:30:02.580 ************************************
00:30:02.580 02:04:48 -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:30:02.580 02:04:48 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:30:02.580 02:04:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:30:02.580 02:04:48 -- common/autotest_common.sh@10 -- # set +x
00:30:02.580 ************************************
00:30:02.580 START TEST nvmf_target_disconnect
00:30:02.580 ************************************
00:30:02.580 02:04:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp
00:30:02.840 * Looking for test storage...
00:30:02.840 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:30:02.840 02:04:48 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:02.840 02:04:48 -- nvmf/common.sh@7 -- # uname -s
00:30:02.840 02:04:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:02.840 02:04:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:02.840 02:04:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:02.840 02:04:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:02.840 02:04:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:02.840 02:04:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:02.840 02:04:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:02.840 02:04:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:02.840 02:04:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:02.840 02:04:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:02.840 02:04:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:30:02.840 02:04:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:30:02.840 02:04:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:02.840 02:04:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:02.840 02:04:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:02.840 02:04:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:02.840 02:04:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:02.840 02:04:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:02.840 02:04:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:02.840 02:04:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:02.840 02:04:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:02.840 02:04:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:02.840 02:04:48 -- paths/export.sh@5 -- # export PATH
00:30:02.840 02:04:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:30:02.840 02:04:48 -- nvmf/common.sh@46 -- # : 0
00:30:02.840 02:04:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:30:02.840 02:04:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:30:02.840 02:04:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:30:02.840 02:04:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:30:02.840 02:04:48 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:30:02.840 02:04:48 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:30:02.840 02:04:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:30:02.840 02:04:48 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:30:02.840 02:04:48 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme
00:30:02.840 02:04:48 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64
00:30:02.840 02:04:48 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512
00:30:02.840 02:04:48 -- host/target_disconnect.sh@77 -- # nvmftestinit
00:30:02.840 02:04:48 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:30:02.840 02:04:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:30:02.840 02:04:48 -- nvmf/common.sh@436 -- # prepare_net_devs
00:30:02.840 02:04:48 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:30:02.840 02:04:48 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:30:02.840 02:04:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:02.840 02:04:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:02.840 02:04:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:02.840 02:04:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]]
00:30:02.840 02:04:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs
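The host identity exported above comes from nvme-cli: nvmf/common.sh captures the output of nvme gen-hostnqn into NVME_HOSTNQN and reuses its UUID component as NVME_HOSTID. A standalone sketch of the same derivation (the printed UUID will differ on every host; variable names here are illustrative):

  HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<random-uuid>
  HOSTID=${HOSTNQN##*:uuid:}       # strip the NQN prefix to recover the bare UUID
  echo "$HOSTNQN" "$HOSTID"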
xtrace_disable 00:30:02.840 02:04:48 -- common/autotest_common.sh@10 -- # set +x 00:30:04.746 02:04:50 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:04.747 02:04:50 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:04.747 02:04:50 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:04.747 02:04:50 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:04.747 02:04:50 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:04.747 02:04:50 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:04.747 02:04:50 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:04.747 02:04:50 -- nvmf/common.sh@294 -- # net_devs=() 00:30:04.747 02:04:50 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:04.747 02:04:50 -- nvmf/common.sh@295 -- # e810=() 00:30:04.747 02:04:50 -- nvmf/common.sh@295 -- # local -ga e810 00:30:04.747 02:04:50 -- nvmf/common.sh@296 -- # x722=() 00:30:04.747 02:04:50 -- nvmf/common.sh@296 -- # local -ga x722 00:30:04.747 02:04:50 -- nvmf/common.sh@297 -- # mlx=() 00:30:04.747 02:04:50 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:04.747 02:04:50 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.747 02:04:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.747 02:04:50 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.747 02:04:50 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.747 02:04:50 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.747 02:04:50 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.747 02:04:50 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.747 02:04:50 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.747 02:04:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.747 02:04:50 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.747 02:04:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.747 02:04:50 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:04.747 02:04:50 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:04.747 02:04:50 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:04.747 02:04:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:04.747 02:04:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:04.747 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:04.747 02:04:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:04.747 02:04:50 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:04.747 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:04.747 02:04:50 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:04.747 02:04:50 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:04.747 02:04:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.747 02:04:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:04.747 02:04:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.747 02:04:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:04.747 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:04.747 02:04:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.747 02:04:50 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:04.747 02:04:50 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.747 02:04:50 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:04.747 02:04:50 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.747 02:04:50 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:04.747 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:04.747 02:04:50 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.747 02:04:50 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:04.747 02:04:50 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:04.747 02:04:50 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:04.747 02:04:50 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.747 02:04:50 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.747 02:04:50 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.747 02:04:50 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:04.747 02:04:50 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.747 02:04:50 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.747 02:04:50 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:04.747 02:04:50 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.747 02:04:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.747 02:04:50 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:04.747 02:04:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:04.747 02:04:50 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.747 02:04:50 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.747 02:04:50 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.747 02:04:50 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.747 02:04:50 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:04.747 02:04:50 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:04.747 02:04:50 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:04.747 02:04:50 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:04.747 02:04:50 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:04.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:04.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:30:04.747 00:30:04.747 --- 10.0.0.2 ping statistics --- 00:30:04.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.747 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:30:04.747 02:04:50 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:04.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:04.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:30:04.747 00:30:04.747 --- 10.0.0.1 ping statistics --- 00:30:04.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.747 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:30:04.747 02:04:50 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:04.747 02:04:50 -- nvmf/common.sh@410 -- # return 0 00:30:04.747 02:04:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:04.747 02:04:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:04.747 02:04:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:04.747 02:04:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:04.747 02:04:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:04.747 02:04:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:04.747 02:04:50 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:04.747 02:04:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:04.747 02:04:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:04.747 02:04:50 -- common/autotest_common.sh@10 -- # set +x 00:30:04.747 ************************************ 00:30:04.747 START TEST nvmf_target_disconnect_tc1 00:30:04.747 ************************************ 00:30:04.747 02:04:50 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc1 00:30:04.747 02:04:50 -- host/target_disconnect.sh@32 -- # set +e 00:30:04.747 02:04:50 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:04.747 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.747 [2024-04-15 02:04:50.338988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.747 [2024-04-15 02:04:50.339306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:04.747 [2024-04-15 02:04:50.339340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f3f510 with addr=10.0.0.2, port=4420 00:30:04.747 [2024-04-15 02:04:50.339373] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:04.747 [2024-04-15 02:04:50.339395] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:04.747 [2024-04-15 02:04:50.339409] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:30:04.747 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:04.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:04.747 Initializing NVMe Controllers 00:30:04.747 02:04:50 -- host/target_disconnect.sh@33 -- # trap - ERR 00:30:04.747 02:04:50 -- host/target_disconnect.sh@33 -- # print_backtrace 00:30:04.747 02:04:50 -- common/autotest_common.sh@1132 -- # [[ hxBET =~ e ]] 00:30:04.747 02:04:50 -- common/autotest_common.sh@1132 -- # return 0 00:30:04.747 
02:04:50 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:30:04.747 02:04:50 -- host/target_disconnect.sh@41 -- # set -e 00:30:04.747 00:30:04.747 real 0m0.100s 00:30:04.747 user 0m0.037s 00:30:04.747 sys 0m0.062s 00:30:04.747 02:04:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:04.747 02:04:50 -- common/autotest_common.sh@10 -- # set +x 00:30:04.747 ************************************ 00:30:04.747 END TEST nvmf_target_disconnect_tc1 00:30:04.747 ************************************ 00:30:04.747 02:04:50 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:04.747 02:04:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:04.747 02:04:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:04.747 02:04:50 -- common/autotest_common.sh@10 -- # set +x 00:30:04.747 ************************************ 00:30:04.747 START TEST nvmf_target_disconnect_tc2 00:30:04.747 ************************************ 00:30:04.747 02:04:50 -- common/autotest_common.sh@1104 -- # nvmf_target_disconnect_tc2 00:30:04.747 02:04:50 -- host/target_disconnect.sh@45 -- # disconnect_init 10.0.0.2 00:30:04.747 02:04:50 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:04.748 02:04:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:04.748 02:04:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:04.748 02:04:50 -- common/autotest_common.sh@10 -- # set +x 00:30:04.748 02:04:50 -- nvmf/common.sh@469 -- # nvmfpid=2288005 00:30:04.748 02:04:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:04.748 02:04:50 -- nvmf/common.sh@470 -- # waitforlisten 2288005 00:30:04.748 02:04:50 -- common/autotest_common.sh@819 -- # '[' -z 2288005 ']' 00:30:04.748 02:04:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.748 02:04:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:04.748 02:04:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:04.748 02:04:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:04.748 02:04:50 -- common/autotest_common.sh@10 -- # set +x 00:30:05.007 [2024-04-15 02:04:50.431449] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:30:05.007 [2024-04-15 02:04:50.431535] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:05.007 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.007 [2024-04-15 02:04:50.500756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:05.007 [2024-04-15 02:04:50.587305] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:05.007 [2024-04-15 02:04:50.587449] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:05.007 [2024-04-15 02:04:50.587466] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:05.007 [2024-04-15 02:04:50.587482] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
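At this point tc2 has a freshly started nvmf_tgt running inside the cvl_0_0_ns_spdk namespace, and the rpc_cmd lines that follow provision it over SPDK's JSON-RPC interface. For reference, a minimal stand-alone sketch of the same bring-up, run from the SPDK tree — the default /var/tmp/spdk.sock RPC socket is an assumption here, and the paths are abbreviated relative to the repository root:

  # start the target in the test namespace (what nvmfappstart -m 0xF0 does)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!

  # provision: a 64 MB malloc bdev with 512-byte blocks, exported over NVMe/TCP
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

  # launch the I/O generator, then yank the target out from under it
  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
  reconnectpid=$!
  sleep 2
  kill -9 "$nvmfpid"

The kill -9, rather than a clean shutdown, is the point of the test: the target disappears mid-I/O with no chance to close the association, and everything from here on is the host-side recovery path being exercised.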
00:30:05.007 [2024-04-15 02:04:50.587572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:30:05.007 [2024-04-15 02:04:50.587646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:30:05.007 [2024-04-15 02:04:50.587714] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:30:05.007 [2024-04-15 02:04:50.587717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:30:05.941 02:04:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:05.941 02:04:51 -- common/autotest_common.sh@852 -- # return 0 00:30:05.941 02:04:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:05.941 02:04:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:05.941 02:04:51 -- common/autotest_common.sh@10 -- # set +x 00:30:05.941 02:04:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:05.941 02:04:51 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:05.941 02:04:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.941 02:04:51 -- common/autotest_common.sh@10 -- # set +x 00:30:05.941 Malloc0 00:30:05.941 02:04:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.941 02:04:51 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:05.941 02:04:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.941 02:04:51 -- common/autotest_common.sh@10 -- # set +x 00:30:05.941 [2024-04-15 02:04:51.408594] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:05.941 02:04:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.941 02:04:51 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:05.941 02:04:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.941 02:04:51 -- common/autotest_common.sh@10 -- # set +x 00:30:05.941 02:04:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.941 02:04:51 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:05.941 02:04:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.941 02:04:51 -- common/autotest_common.sh@10 -- # set +x 00:30:05.941 02:04:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.941 02:04:51 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.941 02:04:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.941 02:04:51 -- common/autotest_common.sh@10 -- # set +x 00:30:05.941 [2024-04-15 02:04:51.436843] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.941 02:04:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.941 02:04:51 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:05.941 02:04:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:05.941 02:04:51 -- common/autotest_common.sh@10 -- # set +x 00:30:05.941 02:04:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:05.941 02:04:51 -- host/target_disconnect.sh@50 -- # reconnectpid=2288164 00:30:05.941 02:04:51 -- host/target_disconnect.sh@52 -- # sleep 2 00:30:05.941 02:04:51 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:30:05.941 EAL: No free 2048 kB hugepages reported on node 1
00:30:07.847 02:04:53 -- host/target_disconnect.sh@53 -- # kill -9 2288005
00:30:07.847 02:04:53 -- host/target_disconnect.sh@55 -- # sleep 2
00:30:07.847 Read completed with error (sct=0, sc=8)
00:30:07.847 starting I/O failed
00:30:07.847 Write completed with error (sct=0, sc=8)
00:30:07.847 starting I/O failed
[... the same two-line 'completed with error (sct=0, sc=8)' / 'starting I/O failed' pair repeats for each remaining outstanding read and write on this qpair ...]
00:30:07.847 [2024-04-15 02:04:53.463686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
[... a second run of aborted reads and writes follows ...]
00:30:07.847 [2024-04-15 02:04:53.464038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
[... and a third ...]
00:30:07.847 [2024-04-15 02:04:53.464381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
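Two distinct failure signatures are interleaved in the storm above. The CQ transport error -6 (ENXIO, 'No such device or address') is the TCP transport discovering that the connection under the qpair is gone; each request still outstanding on that qpair is then completed back to the caller with sct=0, sc=8 — in NVMe terms, Generic Command Status 0x08, Command Aborted due to SQ Deletion. One way to confirm the status-code mapping against the SPDK tree itself (path relative to the repository root):

  # sc=0x8 under sct=0 is the generic 'aborted due to SQ deletion' status
  grep -n 'SQ_DELETION' include/spdk/nvme_spec.h
  # expected to match: SPDK_NVME_SC_ABORTED_SQ_DELETION = 0x8,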
00:30:07.847 [2024-04-15 02:04:53.464711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.847 [2024-04-15 02:04:53.464954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:07.847 [2024-04-15 02:04:53.464985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:07.847 qpair failed and we were unable to recover it.
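The reconnect attempts that follow all fail with errno = 111, which on Linux is ECONNREFUSED: pid 2288005 was SIGKILLed, nothing is listening on 10.0.0.2:4420 any more, and each new TCP connection is refused outright. The mapping is easy to check on the build host (the header path assumes a Linux/glibc layout):

  grep -w 111 /usr/include/asm-generic/errno.h
  # => #define ECONNREFUSED    111     /* Connection refused */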
00:30:07.847 [2024-04-15 02:04:53.466192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.847 [2024-04-15 02:04:53.466391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.466432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.466646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.466877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.466903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.467240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.467471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.467496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.467793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.468101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.468127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.468327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.468564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.468606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.469003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.469293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.469321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.469623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.469920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.469947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 
00:30:07.848 [2024-04-15 02:04:53.470190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.470407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.470446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.470794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.471068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.471105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.471351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.471642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.471671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.472018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.472305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.472332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.472553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.472853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.472883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.473143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.473375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.473402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.473702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.473945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.473975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 
00:30:07.848 [2024-04-15 02:04:53.474228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.474449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.474474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.474736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.475014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.475043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.475298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.475558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.475598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.475863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.476238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.476263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.476521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.476768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.476798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.477057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.477277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.477303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.477569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.477903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.477959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 
00:30:07.848 [2024-04-15 02:04:53.478253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.478684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.478736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.479041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.479289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.479340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.479908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.480215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.480247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.480559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.480835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.480863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.481101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.481302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.481356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.481555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.481799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.481825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 00:30:07.848 [2024-04-15 02:04:53.482067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.482345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.482409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.848 qpair failed and we were unable to recover it. 
00:30:07.848 [2024-04-15 02:04:53.482631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.848 [2024-04-15 02:04:53.482882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.482908] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-04-15 02:04:53.483134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.483379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.483410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-04-15 02:04:53.483687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.483929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.483970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-04-15 02:04:53.484193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.484431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.484457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-04-15 02:04:53.484698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.485099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.485126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-04-15 02:04:53.485338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.485607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.485641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-04-15 02:04:53.485890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.486173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.486199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 
00:30:07.849 [2024-04-15 02:04:53.486457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.486684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.486711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-04-15 02:04:53.486913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.487174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.487205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-04-15 02:04:53.487502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.487818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.487844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-04-15 02:04:53.488100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.488319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.488365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-04-15 02:04:53.488608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.488882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.488912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-04-15 02:04:53.489195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.489442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.489486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-04-15 02:04:53.489724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.489939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.489966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 
00:30:07.849 [2024-04-15 02:04:53.490262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.490525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.490552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-04-15 02:04:53.490830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.491088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.491126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-04-15 02:04:53.491403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.491859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.491911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-04-15 02:04:53.492163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.492406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:07.849 [2024-04-15 02:04:53.492447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:07.849 qpair failed and we were unable to recover it. 00:30:07.849 [2024-04-15 02:04:53.492704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.493035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.493085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.115 qpair failed and we were unable to recover it. 00:30:08.115 [2024-04-15 02:04:53.493368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.493654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.493681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.115 qpair failed and we were unable to recover it. 00:30:08.115 [2024-04-15 02:04:53.493942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.494154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.494183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.115 qpair failed and we were unable to recover it. 
00:30:08.115 [2024-04-15 02:04:53.494439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.494766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.494791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.115 qpair failed and we were unable to recover it. 00:30:08.115 [2024-04-15 02:04:53.495070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.495363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.495389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.115 qpair failed and we were unable to recover it. 00:30:08.115 [2024-04-15 02:04:53.495831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.496138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.496168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.115 qpair failed and we were unable to recover it. 00:30:08.115 [2024-04-15 02:04:53.496378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.496771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.496822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.115 qpair failed and we were unable to recover it. 00:30:08.115 [2024-04-15 02:04:53.497138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.497415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.497446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.115 qpair failed and we were unable to recover it. 00:30:08.115 [2024-04-15 02:04:53.497696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.497932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.497959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.115 qpair failed and we were unable to recover it. 00:30:08.115 [2024-04-15 02:04:53.498180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.498498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.498526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.115 qpair failed and we were unable to recover it. 
00:30:08.115 [2024-04-15 02:04:53.498819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.499098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.499129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.115 qpair failed and we were unable to recover it. 00:30:08.115 [2024-04-15 02:04:53.499377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.499622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.499649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.115 qpair failed and we were unable to recover it. 00:30:08.115 [2024-04-15 02:04:53.499903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.500176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.500203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.115 qpair failed and we were unable to recover it. 00:30:08.115 [2024-04-15 02:04:53.500483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.500741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.500784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.115 qpair failed and we were unable to recover it. 00:30:08.115 [2024-04-15 02:04:53.501035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.501301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.501335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.115 qpair failed and we were unable to recover it. 00:30:08.115 [2024-04-15 02:04:53.501608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.501903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.501932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.115 qpair failed and we were unable to recover it. 00:30:08.115 [2024-04-15 02:04:53.502190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.502436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.115 [2024-04-15 02:04:53.502467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.115 qpair failed and we were unable to recover it. 
00:30:08.115 [2024-04-15 02:04:53.502718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.115 [2024-04-15 02:04:53.503087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.115 [2024-04-15 02:04:53.503118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.115 qpair failed and we were unable to recover it.
00:30:08.115 [2024-04-15 02:04:53.503346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.115 [2024-04-15 02:04:53.503618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.115 [2024-04-15 02:04:53.503648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.115 qpair failed and we were unable to recover it.
00:30:08.115 [2024-04-15 02:04:53.503897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.115 [2024-04-15 02:04:53.504153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.115 [2024-04-15 02:04:53.504184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.115 qpair failed and we were unable to recover it.
00:30:08.115 [2024-04-15 02:04:53.504429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.115 [2024-04-15 02:04:53.504668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.115 [2024-04-15 02:04:53.504697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.115 qpair failed and we were unable to recover it.
00:30:08.115 [2024-04-15 02:04:53.504969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.115 [2024-04-15 02:04:53.505248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.115 [2024-04-15 02:04:53.505277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.115 qpair failed and we were unable to recover it.
00:30:08.115 [2024-04-15 02:04:53.505529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.505750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.505776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.506073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.506354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.506382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.506630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.506920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.506944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.507216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.507474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.507503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.507779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.507992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.508023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.508315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.508571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.508612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.508940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.509170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.509201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.509420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.509666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.509695] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.509980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.510253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.510283] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.510558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.510799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.510840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.511098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.511368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.511393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.511653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.512085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.512135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.512437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.512864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.512915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.513171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.513454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.513479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.513753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.514030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.514077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.514398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.514828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.514876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.515157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.515541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.515600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.515887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.516190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.516216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.516475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.516699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.516724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.517078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.517328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.517360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.517606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.517870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.517895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.518154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.518439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.518468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.518740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.519163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.519193] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.519443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.519697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.519721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.519963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.520190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.520216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.520442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.520698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.520726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.521000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.521229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.521256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.521496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.521738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.521779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.116 [2024-04-15 02:04:53.522071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.522319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.116 [2024-04-15 02:04:53.522347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.116 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.522619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.523010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.523075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.523315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.523605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.523630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.524012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.524248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.524279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.524543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.524855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.524883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.525226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.525478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.525507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.525777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.526057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.526086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.526366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.526798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.526860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.527143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.527369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.527396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.527682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.528114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.528143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.528395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.528608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.528639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.528883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.529116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.529143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.529378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.529574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.529599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.529938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.530222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.530249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.530508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.530786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.530855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.531130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.531391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.531432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.531722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.532081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.532134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.532413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.532839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.532885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.533145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.533393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.533422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.533661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.533916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.533944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.534368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.534708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.534736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.534997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.535274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.535304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.535582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.536105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.536135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.536357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.536623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.536653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.536918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.537191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.537221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.537482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.537717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.537749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.538082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.538376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.538400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.538637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.538908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.538937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.539190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.539436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.539464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.117 qpair failed and we were unable to recover it.
00:30:08.117 [2024-04-15 02:04:53.539750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.117 [2024-04-15 02:04:53.539998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.540026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.540318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.540823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.540870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.541143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.541388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.541428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.541680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.541970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.542000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.542276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.542576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.542605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.542853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.543127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.543164] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.543451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.543739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.543768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.544150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.544397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.544426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.544670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.545038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.545128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.545377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.545629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.545656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.545872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.546144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.546171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.546429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.546878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.546926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.547245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.547524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.547591] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.547833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.548077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.548113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.548359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.548829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.548884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.549211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.549468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.549497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.549711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.549960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.549990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.550254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.550630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.550678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.550915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.551170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.551200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.551425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.551680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.551710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.551963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.552168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.552196] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.552399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.552654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.552683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.552958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.553251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.553280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.553552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.553853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.553879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.554138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.554358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.554387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.554608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.554817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.554846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.555090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.555347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.555379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.555640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.555891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.555922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.556203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.556445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.118 [2024-04-15 02:04:53.556486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.118 qpair failed and we were unable to recover it.
00:30:08.118 [2024-04-15 02:04:53.556709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.556945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.556970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.557263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.557583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.557609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.557905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.558153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.558182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.558409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.558651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.558680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.558923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.559148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.559175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.559452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.559816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.559875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.560125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.560325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.560366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.560699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.560951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.560981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.561195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.561462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.561492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.561739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.561997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.562028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.562349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.562781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.562837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.563102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.563354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.563386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.563630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.563930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.563961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.564280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.564745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.564797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.565068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.565315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.565344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.565590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.565812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.565841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.566090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.566298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.566324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.566575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.566795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.566826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.567099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.567326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.567353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.567623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.567844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.567869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.568127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.568385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.568420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.119 qpair failed and we were unable to recover it.
00:30:08.119 [2024-04-15 02:04:53.568636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.119 [2024-04-15 02:04:53.568944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.568974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.120 qpair failed and we were unable to recover it.
00:30:08.120 [2024-04-15 02:04:53.569246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.569462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.569491] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.120 qpair failed and we were unable to recover it.
00:30:08.120 [2024-04-15 02:04:53.569759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.570000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.570029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.120 qpair failed and we were unable to recover it.
00:30:08.120 [2024-04-15 02:04:53.570291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.570740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.570801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.120 qpair failed and we were unable to recover it.
00:30:08.120 [2024-04-15 02:04:53.571056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.571303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.571352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.120 qpair failed and we were unable to recover it.
00:30:08.120 [2024-04-15 02:04:53.571601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.571811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.571840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.120 qpair failed and we were unable to recover it.
00:30:08.120 [2024-04-15 02:04:53.572106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.572432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.572462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.120 qpair failed and we were unable to recover it.
00:30:08.120 [2024-04-15 02:04:53.572688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.572945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.572971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.120 qpair failed and we were unable to recover it.
00:30:08.120 [2024-04-15 02:04:53.573227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.573474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.573505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.120 qpair failed and we were unable to recover it.
00:30:08.120 [2024-04-15 02:04:53.573732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.573995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.574029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.120 qpair failed and we were unable to recover it.
00:30:08.120 [2024-04-15 02:04:53.574431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.574956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.575009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.120 qpair failed and we were unable to recover it.
00:30:08.120 [2024-04-15 02:04:53.575306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.575803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.575854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.120 qpair failed and we were unable to recover it.
00:30:08.120 [2024-04-15 02:04:53.576131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.576386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.576415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.120 qpair failed and we were unable to recover it.
00:30:08.120 [2024-04-15 02:04:53.576661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.576882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.120 [2024-04-15 02:04:53.576911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.120 qpair failed and we were unable to recover it.
00:30:08.120 [2024-04-15 02:04:53.577132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.577383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.577412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.120 qpair failed and we were unable to recover it. 00:30:08.120 [2024-04-15 02:04:53.577634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.577859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.577889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.120 qpair failed and we were unable to recover it. 00:30:08.120 [2024-04-15 02:04:53.578132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.578518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.578583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.120 qpair failed and we were unable to recover it. 00:30:08.120 [2024-04-15 02:04:53.578852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.579070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.579100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.120 qpair failed and we were unable to recover it. 00:30:08.120 [2024-04-15 02:04:53.579326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.579785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.579835] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.120 qpair failed and we were unable to recover it. 00:30:08.120 [2024-04-15 02:04:53.580095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.580332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.580368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.120 qpair failed and we were unable to recover it. 00:30:08.120 [2024-04-15 02:04:53.580605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.580837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.580864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.120 qpair failed and we were unable to recover it. 
00:30:08.120 [2024-04-15 02:04:53.581114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.581352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.581378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.120 qpair failed and we were unable to recover it. 00:30:08.120 [2024-04-15 02:04:53.581608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.581851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.581878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.120 qpair failed and we were unable to recover it. 00:30:08.120 [2024-04-15 02:04:53.582165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.582410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.582436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.120 qpair failed and we were unable to recover it. 00:30:08.120 [2024-04-15 02:04:53.582684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.582957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.582987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.120 qpair failed and we were unable to recover it. 00:30:08.120 [2024-04-15 02:04:53.583243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.583483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.583509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.120 qpair failed and we were unable to recover it. 00:30:08.120 [2024-04-15 02:04:53.583734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.583987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.584018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.120 qpair failed and we were unable to recover it. 00:30:08.120 [2024-04-15 02:04:53.584317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.584828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.584880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.120 qpair failed and we were unable to recover it. 
00:30:08.120 [2024-04-15 02:04:53.585102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.585331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.120 [2024-04-15 02:04:53.585358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.120 qpair failed and we were unable to recover it. 00:30:08.120 [2024-04-15 02:04:53.585618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.585915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.585951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.586201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.586628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.586660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.586907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.587138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.587165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.587458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.587813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.587862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.588136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.588530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.588583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.588898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.589165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.589192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 
00:30:08.121 [2024-04-15 02:04:53.589488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.589950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.590011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.590299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.590608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.590633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.590873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.591126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.591156] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.591615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.592139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.592172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.592439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.592723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.592749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.593093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.593410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.593442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.593697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.593957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.593986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 
00:30:08.121 [2024-04-15 02:04:53.594367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.594913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.594969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.595272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.595651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.595709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.595989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.596222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.596249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.596516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.596763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.596793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.597017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.597264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.597289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.597506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.597728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.597756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.597999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.598261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.598290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 
00:30:08.121 [2024-04-15 02:04:53.598582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.599074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.599136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.599411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.599647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.599688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.599906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.600188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.600215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.600463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.600880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.600941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.601163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.601419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.601447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.601694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.601923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.601948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.602221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.602438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.602464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 
00:30:08.121 [2024-04-15 02:04:53.602947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.603248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.603277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.603528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.603807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.121 [2024-04-15 02:04:53.603831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.121 qpair failed and we were unable to recover it. 00:30:08.121 [2024-04-15 02:04:53.604072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.604278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.604304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.604750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.605021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.605058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.605286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.605512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.605543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.605833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.606090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.606121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.606341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.606592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.606633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 
00:30:08.122 [2024-04-15 02:04:53.606863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.607090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.607133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.607463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.607728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.607756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.608024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.608313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.608353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.608647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.609137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.609167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.609412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.609644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.609668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.609926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.610171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.610201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.610450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.610952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.611004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 
00:30:08.122 [2024-04-15 02:04:53.611232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.611487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.611516] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.611786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.612070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.612101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.612354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.612597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.612625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.612901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.613215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.613244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.613467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.613736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.613764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.613988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.614271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.614297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.614513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.614804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.614832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 
00:30:08.122 [2024-04-15 02:04:53.615120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.615347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.615373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.615650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.616095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.616120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.616409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.616918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.616967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.617228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.617439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.617464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.617702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.617973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.618001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.618286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.618533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.618558] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.618861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.619134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.619163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 
00:30:08.122 [2024-04-15 02:04:53.619436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.619877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.619927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.620243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.620468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.620493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.122 qpair failed and we were unable to recover it. 00:30:08.122 [2024-04-15 02:04:53.620696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.620967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.122 [2024-04-15 02:04:53.620998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.621255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.621506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.621536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.621898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.622209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.622236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.622502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.622702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.622726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.623020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.623408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.623434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 
00:30:08.123 [2024-04-15 02:04:53.623731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.623953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.623978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.624303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.624675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.624728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.625008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.625286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.625312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.625586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.626022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.626079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.626331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.626685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.626709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.627133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.627381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.627410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.627653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.627894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.627935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 
00:30:08.123 [2024-04-15 02:04:53.628315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.628630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.628669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.628914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.629164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.629207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.629580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.629987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.630025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.630286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.630564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.630593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.630828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.631075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.631104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.631369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.631600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.631641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.631921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.632202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.632231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 
00:30:08.123 [2024-04-15 02:04:53.632480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.632977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.633026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.633303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.633715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.633765] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.633987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.634249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.634274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.634535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.635037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.635091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.635343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.635793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.635845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.636103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.636347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.636374] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.636654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.636925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.636952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 
00:30:08.123 [2024-04-15 02:04:53.637240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.637523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.637550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.637761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.637988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.638013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.638261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.638505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.638534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.123 qpair failed and we were unable to recover it. 00:30:08.123 [2024-04-15 02:04:53.638842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.123 [2024-04-15 02:04:53.639128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.639158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.639400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.639648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.639677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.639914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.640140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.640170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.640418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.640710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.640740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 
00:30:08.124 [2024-04-15 02:04:53.640986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.641264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.641291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.641538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.641885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.641912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.642155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.642435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.642462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.642789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.643035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.643072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.643325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.643563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.643593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.643993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.644316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.644344] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.644587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.644805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.644832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 
00:30:08.124 [2024-04-15 02:04:53.645104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.645337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.645363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.645580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.645948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.645976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.646224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.646476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.646505] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.646776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.647024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.647059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.647290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.647498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.647523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.647790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.648028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.648063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.648359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.648633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.648662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 
00:30:08.124 [2024-04-15 02:04:53.648987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.649255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.649282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.649486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.649697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.649728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.650006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.650235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.650266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.650513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.650952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.651005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.651282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.651570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.651596] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.651869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.652107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.652139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.124 qpair failed and we were unable to recover it. 00:30:08.124 [2024-04-15 02:04:53.652372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.652665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.124 [2024-04-15 02:04:53.652706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 
00:30:08.125 [2024-04-15 02:04:53.652970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.653187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.653218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.653489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.653770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.653796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.654036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.654260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.654290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.654536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.654852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.654883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.655142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.655385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.655411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.655709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.656112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.656143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.656398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.656638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.656664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 
00:30:08.125 [2024-04-15 02:04:53.656935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.657147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.657177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.657432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.657730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.657759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.657998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.658280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.658309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.658574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.658908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.658934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.659224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.659561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.659621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.659890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.660122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.660152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.660423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.660714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.660779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 
00:30:08.125 [2024-04-15 02:04:53.661102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.661400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.661429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.661697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.662150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.662179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.662429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.662681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.662709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.662934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.663184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.663228] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.663496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.663787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.663812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.664111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.664397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.664423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.664673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.665126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.665163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 
00:30:08.125 [2024-04-15 02:04:53.665406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.665647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.665677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.665927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.666173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.666202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.666435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.666796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.666828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.667107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.667493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.667553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.667825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.668069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.668108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.668334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.668738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.668770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 00:30:08.125 [2024-04-15 02:04:53.669041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.669323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.669351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.125 qpair failed and we were unable to recover it. 
00:30:08.125 [2024-04-15 02:04:53.669617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.125 [2024-04-15 02:04:53.669859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.669883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.670152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.670401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.670443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.670692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.670948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.670981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.671250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.671677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.671728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.671992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.672298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.672329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.672609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.673059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.673124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.673393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.673849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.673901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 
00:30:08.126 [2024-04-15 02:04:53.674167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.674572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.674626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.674867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.675158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.675198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.675445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.675744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.675770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.676144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.676529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.676595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.676828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.677119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.677145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.677501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.678024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.678106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.678359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.678684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.678714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 
00:30:08.126 [2024-04-15 02:04:53.678964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.679188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.679217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.679470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.679861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.679909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.680258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.680534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.680563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.680775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.681052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.681082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.681309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.681715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.681762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.682033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.682320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.682359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.682634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.682878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.682904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 
00:30:08.126 [2024-04-15 02:04:53.683155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.683425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.683455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.683735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.684128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.684154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.684443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.684946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.684996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.685273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.685669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.685719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.686066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.686324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.686355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.686616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.687012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.687074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.687348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.687743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.687793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 
00:30:08.126 [2024-04-15 02:04:53.688146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.688420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.126 [2024-04-15 02:04:53.688449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.126 qpair failed and we were unable to recover it. 00:30:08.126 [2024-04-15 02:04:53.688715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.689037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.689083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.689372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.689820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.689869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.690193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.690463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.690493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.690756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.690997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.691023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.691343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.691773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.691825] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.692120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.692348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.692375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 
00:30:08.127 [2024-04-15 02:04:53.692666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.692884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.692910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.693142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.693391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.693420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.693640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.693921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.693947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.694206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.694633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.694664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.694932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.695209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.695239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.695500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.695861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.695885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.696216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.696459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.696500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 
00:30:08.127 [2024-04-15 02:04:53.696725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.697007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.697036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.697447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.697922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.697976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.698265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.698740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.698791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.699063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.699350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.699379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.699777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.700104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.700131] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.700423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.700918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.700968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.701243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.701742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.701791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 
00:30:08.127 [2024-04-15 02:04:53.702068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.702295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.702321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.702556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.702831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.702856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.703146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.703386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.703428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.703696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.703944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.703976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.704262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.704749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.704801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.705076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.705322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.705354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.705569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.705953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.706004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 
00:30:08.127 [2024-04-15 02:04:53.706268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.706516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.706545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.706785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.707008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.127 [2024-04-15 02:04:53.707040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.127 qpair failed and we were unable to recover it. 00:30:08.127 [2024-04-15 02:04:53.707277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.707695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.707747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.708027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.708301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.708332] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.708602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.709093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.709146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.709430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.709917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.709967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.710209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.710487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.710517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 
00:30:08.128 [2024-04-15 02:04:53.710795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.711041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.711078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.711327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.711788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.711838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.712133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.712366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.712392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.712703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.713000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.713030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.713338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.713791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.713839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.714086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.714335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.714366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.714578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.714824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.714855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 
00:30:08.128 [2024-04-15 02:04:53.715105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.715340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.715369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.715622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.715847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.715873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.716107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.716354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.716383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.716647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.716869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.716900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.717124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.717376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.717406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.717629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.717858] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.717885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.718091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.718294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.718321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 
00:30:08.128 [2024-04-15 02:04:53.718571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.718828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.718857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.719134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.719385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.719417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.719670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.719920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.719951] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.720178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.720408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.720440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.720708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.720923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.720953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.721203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.721454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.721483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 00:30:08.128 [2024-04-15 02:04:53.721716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.721987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.722014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.128 qpair failed and we were unable to recover it. 
00:30:08.128 [2024-04-15 02:04:53.722250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.722476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.128 [2024-04-15 02:04:53.722507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.129 qpair failed and we were unable to recover it. 00:30:08.129 [2024-04-15 02:04:53.722731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.722958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.722989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.129 qpair failed and we were unable to recover it. 00:30:08.129 [2024-04-15 02:04:53.723241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.723531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.723561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.129 qpair failed and we were unable to recover it. 00:30:08.129 [2024-04-15 02:04:53.723808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.724031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.724067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.129 qpair failed and we were unable to recover it. 00:30:08.129 [2024-04-15 02:04:53.724320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.724740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.724792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.129 qpair failed and we were unable to recover it. 00:30:08.129 [2024-04-15 02:04:53.725012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.725243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.725270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.129 qpair failed and we were unable to recover it. 00:30:08.129 [2024-04-15 02:04:53.725466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.725723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.725750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.129 qpair failed and we were unable to recover it. 
00:30:08.129 [2024-04-15 02:04:53.726026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.726280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.726310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.129 qpair failed and we were unable to recover it. 00:30:08.129 [2024-04-15 02:04:53.726558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.726776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.726806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.129 qpair failed and we were unable to recover it. 00:30:08.129 [2024-04-15 02:04:53.727062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.727284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.727313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.129 qpair failed and we were unable to recover it. 00:30:08.129 [2024-04-15 02:04:53.727535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.727739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.727766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.129 qpair failed and we were unable to recover it. 00:30:08.129 [2024-04-15 02:04:53.728009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.728268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.728298] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.129 qpair failed and we were unable to recover it. 00:30:08.129 [2024-04-15 02:04:53.728547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.728868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.728919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.129 qpair failed and we were unable to recover it. 00:30:08.129 [2024-04-15 02:04:53.729172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.729460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.129 [2024-04-15 02:04:53.729490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.129 qpair failed and we were unable to recover it. 
00:30:08.129 [2024-04-15 02:04:53.729709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.729904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.729931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.129 qpair failed and we were unable to recover it.
00:30:08.129 [2024-04-15 02:04:53.730153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.730363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.730390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.129 qpair failed and we were unable to recover it.
00:30:08.129 [2024-04-15 02:04:53.730644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.730951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.730978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.129 qpair failed and we were unable to recover it.
00:30:08.129 [2024-04-15 02:04:53.731211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.731557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.731631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.129 qpair failed and we were unable to recover it.
00:30:08.129 [2024-04-15 02:04:53.731873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.732138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.732168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.129 qpair failed and we were unable to recover it.
00:30:08.129 [2024-04-15 02:04:53.732414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.732649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.732679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.129 qpair failed and we were unable to recover it.
00:30:08.129 [2024-04-15 02:04:53.732927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.733180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.733211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.129 qpair failed and we were unable to recover it.
00:30:08.129 [2024-04-15 02:04:53.733457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.733675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.733704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.129 qpair failed and we were unable to recover it.
00:30:08.129 [2024-04-15 02:04:53.733945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.734221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.734251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.129 qpair failed and we were unable to recover it.
00:30:08.129 [2024-04-15 02:04:53.734523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.734772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.734802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.129 qpair failed and we were unable to recover it.
00:30:08.129 [2024-04-15 02:04:53.735068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.735285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.735315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.129 qpair failed and we were unable to recover it.
00:30:08.129 [2024-04-15 02:04:53.735563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.735790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.735817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.129 qpair failed and we were unable to recover it.
00:30:08.129 [2024-04-15 02:04:53.736103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.736422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.736475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.129 qpair failed and we were unable to recover it.
00:30:08.129 [2024-04-15 02:04:53.736721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.129 [2024-04-15 02:04:53.737018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.737055] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.737263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.737538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.737567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.737788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.738036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.738073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.738316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.738679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.738729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.739009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.739239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.739266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.739551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.739971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.740029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.740321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.740531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.740560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.740787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.741037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.741073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.741323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.741579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.741608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.741862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.742096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.742126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.742375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.742845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.742898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.743144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.743419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.743446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.743723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.743965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.743994] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.744243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.744489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.744518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.745083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.745355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.745384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.745655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.745953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.745980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.746242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.746675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.746726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.746992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.747231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.747261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.747489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.747737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.747766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.748017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.748256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.748300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.748568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.748973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.749030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.749281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.749627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.749653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.749880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.750218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.750250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.750473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.750695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.750721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.750998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.751234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.751261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.751481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.751965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.752015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.752293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.752513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.752542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.752784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.753030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.753067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.130 [2024-04-15 02:04:53.753295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.753528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.130 [2024-04-15 02:04:53.753570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.130 qpair failed and we were unable to recover it.
00:30:08.131 [2024-04-15 02:04:53.753801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.131 [2024-04-15 02:04:53.754039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.131 [2024-04-15 02:04:53.754078] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.131 qpair failed and we were unable to recover it.
00:30:08.131 [2024-04-15 02:04:53.754300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.131 [2024-04-15 02:04:53.754521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.131 [2024-04-15 02:04:53.754552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.131 qpair failed and we were unable to recover it.
00:30:08.131 [2024-04-15 02:04:53.754776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.131 [2024-04-15 02:04:53.754995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.131 [2024-04-15 02:04:53.755024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.131 qpair failed and we were unable to recover it.
00:30:08.131 [2024-04-15 02:04:53.755263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.131 [2024-04-15 02:04:53.755509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.131 [2024-04-15 02:04:53.755546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.131 qpair failed and we were unable to recover it.
00:30:08.131 [2024-04-15 02:04:53.755826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.131 [2024-04-15 02:04:53.756068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.131 [2024-04-15 02:04:53.756113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.131 qpair failed and we were unable to recover it.
00:30:08.131 [2024-04-15 02:04:53.756334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.131 [2024-04-15 02:04:53.756583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.131 [2024-04-15 02:04:53.756613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.131 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.756896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.757159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.757192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.757469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.757894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.757947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.758201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.758406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.758433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.758658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.758870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.758899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.759131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.759401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.759465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.759706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.759951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.759980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.760200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.760444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.760473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.760689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.760930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.760964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.761215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.761495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.761524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.761847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.762093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.762121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.762372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.762698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.762752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.763004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.763251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.763282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.763548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.763765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.763796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.764057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.764329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.764359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.764584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.764987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.765038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.765338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.765774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.765828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.766068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.766349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.766378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.766592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.766820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.766855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.767115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.767362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.767392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.767656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.767988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.768053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.768310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.768561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.768592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.768846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.769068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.769096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.769332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.769635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.769667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.770021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.770330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.770357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.770583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.770838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.770870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.771097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.771349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.771379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.397 qpair failed and we were unable to recover it.
00:30:08.397 [2024-04-15 02:04:53.771650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.397 [2024-04-15 02:04:53.772086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.772133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.772407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.772762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.772817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.773085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.773330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.773356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.773582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.773837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.773866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.774112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.774395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.774421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.774651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.775042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.775104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.775357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.775738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.775788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.776005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.776271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.776301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.776551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.776787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.776815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.777031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.777289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.777320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.777541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.777758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.777788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.778040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.778323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.778353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.778600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.778848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.778877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.779159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.779433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.779462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.779676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.779902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.779930] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.780182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.780405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.780436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.780660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.781064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.781121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.781370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.781593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.781623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.781899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.782158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.782188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.782444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.782689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.782718] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.782940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.783197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.783229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.783578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.784040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.784103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.784354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.784596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.784625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.784889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.785113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.785143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.785359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.785581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.785612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.785914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.786158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.786189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.786463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.786766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.786795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.787042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.787309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.787340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.398 qpair failed and we were unable to recover it.
00:30:08.398 [2024-04-15 02:04:53.787562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.787794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.398 [2024-04-15 02:04:53.787824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.788069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.788341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.788371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.788619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.788947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.789002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.789261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.789531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.789560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.789838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.790111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.790141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.790363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.790632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.790661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.790934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.791191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.791221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.791477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.791697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.791724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.791978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.792268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.792297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.792737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.793043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.793093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.793369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.793835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.793884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.794127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.794349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.794378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.794656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.795124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.795155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.795429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.795680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.795709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.795941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.796193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.796224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.796469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.796707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.796747] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.796996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.797278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.797308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.797555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.797758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.797786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.798039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.798302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.798331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.798584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.798828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.798859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.799090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.799312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.799341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.799576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.799815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.799845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.800100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.800320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.800349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.800596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.800810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.399 [2024-04-15 02:04:53.800842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.399 qpair failed and we were unable to recover it.
00:30:08.399 [2024-04-15 02:04:53.801099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.399 [2024-04-15 02:04:53.801319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.399 [2024-04-15 02:04:53.801347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.399 qpair failed and we were unable to recover it. 00:30:08.399 [2024-04-15 02:04:53.801566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.399 [2024-04-15 02:04:53.801839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.399 [2024-04-15 02:04:53.801868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.399 qpair failed and we were unable to recover it. 00:30:08.399 [2024-04-15 02:04:53.802141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.399 [2024-04-15 02:04:53.802399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.399 [2024-04-15 02:04:53.802428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.399 qpair failed and we were unable to recover it. 00:30:08.399 [2024-04-15 02:04:53.802660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.399 [2024-04-15 02:04:53.802905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.399 [2024-04-15 02:04:53.802934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.399 qpair failed and we were unable to recover it. 00:30:08.399 [2024-04-15 02:04:53.803172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.399 [2024-04-15 02:04:53.803400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.399 [2024-04-15 02:04:53.803425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.399 qpair failed and we were unable to recover it. 00:30:08.399 [2024-04-15 02:04:53.803657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.803941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.803970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.804213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.804663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.804713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 
00:30:08.400 [2024-04-15 02:04:53.804992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.805269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.805299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.805544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.805780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.805807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.806056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.806763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.806796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.807065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.807698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.807731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.808040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.808331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.808357] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.808619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.808842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.808872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.809107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.809382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.809411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 
00:30:08.400 [2024-04-15 02:04:53.809745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.810062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.810093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.810320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.810540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.810571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.810825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.811103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.811133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.811425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.811732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.811763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.812137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.812378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.812412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.812700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.812952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.812984] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.813250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.813462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.813488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 
00:30:08.400 [2024-04-15 02:04:53.813746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.814004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.814034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.814274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.814473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.814499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.814748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.814989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.815022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.815259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.815495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.815523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.815785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.815997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.816027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.816256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.816472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.816498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 00:30:08.400 [2024-04-15 02:04:53.816719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.816971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.400 [2024-04-15 02:04:53.817000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.400 qpair failed and we were unable to recover it. 
00:30:08.400 [2024-04-15 02:04:53.817243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.400 [2024-04-15 02:04:53.817467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.400 [2024-04-15 02:04:53.817497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.400 qpair failed and we were unable to recover it.
00:30:08.400 [2024-04-15 02:04:53.817758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.400 [2024-04-15 02:04:53.818021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.400 [2024-04-15 02:04:53.818059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.400 qpair failed and we were unable to recover it.
00:30:08.400 [2024-04-15 02:04:53.818360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.400 [2024-04-15 02:04:53.818624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.400 [2024-04-15 02:04:53.818652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420
00:30:08.400 qpair failed and we were unable to recover it.
00:30:08.400 [2024-04-15 02:04:53.818965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.400 [2024-04-15 02:04:53.819239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.400 [2024-04-15 02:04:53.819267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420
00:30:08.400 qpair failed and we were unable to recover it.
00:30:08.400 [2024-04-15 02:04:53.819511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.400 [2024-04-15 02:04:53.819810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.400 [2024-04-15 02:04:53.819853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420
00:30:08.400 qpair failed and we were unable to recover it.
00:30:08.400 [2024-04-15 02:04:53.820119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.400 [2024-04-15 02:04:53.820330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.400 [2024-04-15 02:04:53.820358] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420
00:30:08.400 qpair failed and we were unable to recover it.
00:30:08.401 [2024-04-15 02:04:53.820637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.401 [2024-04-15 02:04:53.820964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.401 [2024-04-15 02:04:53.821007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420
00:30:08.401 qpair failed and we were unable to recover it.
00:30:08.404 [2024-04-15 02:04:53.875785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.876108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.876135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.404 qpair failed and we were unable to recover it. 00:30:08.404 [2024-04-15 02:04:53.876348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.876641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.876670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.404 qpair failed and we were unable to recover it. 00:30:08.404 [2024-04-15 02:04:53.876941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.877161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.877190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.404 qpair failed and we were unable to recover it. 00:30:08.404 [2024-04-15 02:04:53.877442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.877803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.877850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.404 qpair failed and we were unable to recover it. 00:30:08.404 [2024-04-15 02:04:53.878064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.878269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.878295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.404 qpair failed and we were unable to recover it. 00:30:08.404 [2024-04-15 02:04:53.878608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.878902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.878948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.404 qpair failed and we were unable to recover it. 00:30:08.404 [2024-04-15 02:04:53.879210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.879494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.879537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.404 qpair failed and we were unable to recover it. 
00:30:08.404 [2024-04-15 02:04:53.879810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.880026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.880058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.404 qpair failed and we were unable to recover it. 00:30:08.404 [2024-04-15 02:04:53.880377] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.880630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.880675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.404 qpair failed and we were unable to recover it. 00:30:08.404 [2024-04-15 02:04:53.880998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.881242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.881269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.404 qpair failed and we were unable to recover it. 00:30:08.404 [2024-04-15 02:04:53.881535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.404 [2024-04-15 02:04:53.881829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.881862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.882136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.882344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.882370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.882636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.882963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.883015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.883261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.883490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.883533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 
00:30:08.405 [2024-04-15 02:04:53.883827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.884044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.884077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.884310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.884562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.884606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.884859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.885125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.885151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.885349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.885670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.885713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.886007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.886230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.886258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.886533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.886846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.886894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.887136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.887358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.887384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 
00:30:08.405 [2024-04-15 02:04:53.887634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.887911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.887956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.888252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.888518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.888562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.888854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.889076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.889104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.889298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.889574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.889619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.889862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.890143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.890170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.890403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.890704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.890749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.891058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.891287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.891313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 
00:30:08.405 [2024-04-15 02:04:53.891565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.891811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.891855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.892079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.892311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.892354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.892676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.893018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.893061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.893306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.893583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.893612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.893846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.894140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.894167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.894370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.894605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.894647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.894946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.895195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.895222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 
00:30:08.405 [2024-04-15 02:04:53.895457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.895755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.895798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.896106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.896361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.896402] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.896673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.897116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.897143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.897342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.897625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.405 [2024-04-15 02:04:53.897668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.405 qpair failed and we were unable to recover it. 00:30:08.405 [2024-04-15 02:04:53.898004] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.898252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.898278] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.898527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.898769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.898812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.899051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.899249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.899275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 
00:30:08.406 [2024-04-15 02:04:53.899543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.899829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.899862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.900083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.900302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.900329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.900580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.900919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.900952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.901179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.901433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.901462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.901763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.902000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.902026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.902269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.902541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.902570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.902865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.903137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.903163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 
00:30:08.406 [2024-04-15 02:04:53.903386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.903635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.903677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.903965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.904218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.904244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.904493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.904874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.904939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.905241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.905496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.905539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.905815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.906067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.906105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.906319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.906581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.906622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.906878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.907146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.907172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 
00:30:08.406 [2024-04-15 02:04:53.907394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.907618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.907660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.907922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.908179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.908206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.908468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.908760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.908807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.909012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.909266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.909292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.909574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.909867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.909911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.910169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.910434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.910476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 00:30:08.406 [2024-04-15 02:04:53.910733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.911111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.406 [2024-04-15 02:04:53.911137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.406 qpair failed and we were unable to recover it. 
00:30:08.406 [2024-04-15 02:04:53.911445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.911875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.911926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.912127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.912383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.912409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.912793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.913081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.913118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.913383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.913744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.913789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.914037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.914315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.914354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.914639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.915011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.915064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.915293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.915595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.915640] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 
00:30:08.407 [2024-04-15 02:04:53.915926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.916217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.916244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.916513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.916838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.916870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.917147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.917393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.917437] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.917713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.918108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.918134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.918402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.918686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.918729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.919010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.919290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.919316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.919611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.920041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.920091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 
00:30:08.407 [2024-04-15 02:04:53.920569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.920884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.920929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.921242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.921636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.921678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.921985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.922279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.922306] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.922589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.922892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.922936] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.923196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.923451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.923503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.923841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.924101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.924145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.924423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.924845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.924888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 
00:30:08.407 [2024-04-15 02:04:53.925114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.925343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.925369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.925691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.925987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.926029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.926426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.926814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.926865] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.927122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.927382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.927407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.927684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.928078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.928140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.928398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.928694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.928738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 00:30:08.407 [2024-04-15 02:04:53.929029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.929281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.929308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.407 qpair failed and we were unable to recover it. 
00:30:08.407 [2024-04-15 02:04:53.929593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.929855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.407 [2024-04-15 02:04:53.929896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.408 qpair failed and we were unable to recover it. 00:30:08.408 [2024-04-15 02:04:53.930191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.930625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.930680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.408 qpair failed and we were unable to recover it. 00:30:08.408 [2024-04-15 02:04:53.930932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.931248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.931274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.408 qpair failed and we were unable to recover it. 00:30:08.408 [2024-04-15 02:04:53.931532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.931972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.932023] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.408 qpair failed and we were unable to recover it. 00:30:08.408 [2024-04-15 02:04:53.932437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.932804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.932857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.408 qpair failed and we were unable to recover it. 00:30:08.408 [2024-04-15 02:04:53.933127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.933365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.933405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.408 qpair failed and we were unable to recover it. 00:30:08.408 [2024-04-15 02:04:53.933633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.934116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.934142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.408 qpair failed and we were unable to recover it. 
00:30:08.408 [2024-04-15 02:04:53.934390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.934607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.934654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.408 qpair failed and we were unable to recover it. 00:30:08.408 [2024-04-15 02:04:53.934945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.935268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.935293] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.408 qpair failed and we were unable to recover it. 00:30:08.408 [2024-04-15 02:04:53.935550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.935824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.935868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.408 qpair failed and we were unable to recover it. 00:30:08.408 [2024-04-15 02:04:53.936144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.936468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.936494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.408 qpair failed and we were unable to recover it. 00:30:08.408 [2024-04-15 02:04:53.936743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.937118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.937145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.408 qpair failed and we were unable to recover it. 00:30:08.408 [2024-04-15 02:04:53.937410] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.937770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.937826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.408 qpair failed and we were unable to recover it. 00:30:08.408 [2024-04-15 02:04:53.938157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.938380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.408 [2024-04-15 02:04:53.938405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.408 qpair failed and we were unable to recover it. 
00:30:08.408 [2024-04-15 02:04:53.938726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.408 [2024-04-15 02:04:53.939055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.408 [2024-04-15 02:04:53.939099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420
00:30:08.408 qpair failed and we were unable to recover it.
[log excerpt condensed: the same four-line sequence — two posix_sock_create connect() failures with errno = 111, one nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420", then "qpair failed and we were unable to recover it." — repeats for every retry from 02:04:53.939 through 02:04:54.027, differing only in timestamps]
00:30:08.413 [2024-04-15 02:04:54.026603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.413 [2024-04-15 02:04:54.026965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.413 [2024-04-15 02:04:54.027014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420
00:30:08.413 qpair failed and we were unable to recover it.
00:30:08.413 [2024-04-15 02:04:54.027242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.413 [2024-04-15 02:04:54.027490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.027539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 00:30:08.414 [2024-04-15 02:04:54.027805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.028075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.028103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 00:30:08.414 [2024-04-15 02:04:54.028330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.028617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.028661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 00:30:08.414 [2024-04-15 02:04:54.028947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.029163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.029190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 00:30:08.414 [2024-04-15 02:04:54.029465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.029731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.029775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 00:30:08.414 [2024-04-15 02:04:54.029996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.030213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.030239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 00:30:08.414 [2024-04-15 02:04:54.030514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.031106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.031133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 
00:30:08.414 [2024-04-15 02:04:54.031365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.031681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.031735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 00:30:08.414 [2024-04-15 02:04:54.031992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.032294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.032321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 00:30:08.414 [2024-04-15 02:04:54.032725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.033015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.033075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 00:30:08.414 [2024-04-15 02:04:54.033299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.033600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.033648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 00:30:08.414 [2024-04-15 02:04:54.033902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.034218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.034244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 00:30:08.414 [2024-04-15 02:04:54.034492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.034805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.034859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 00:30:08.414 [2024-04-15 02:04:54.035118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.035343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.035369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 
00:30:08.414 [2024-04-15 02:04:54.035702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.035931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.035959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 00:30:08.414 [2024-04-15 02:04:54.036218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.036495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.036539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 00:30:08.414 [2024-04-15 02:04:54.036803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.037083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.037110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 00:30:08.414 [2024-04-15 02:04:54.037478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.037795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.037843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 00:30:08.414 [2024-04-15 02:04:54.038128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.038370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.414 [2024-04-15 02:04:54.038412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.414 qpair failed and we were unable to recover it. 00:30:08.414 [2024-04-15 02:04:54.038681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.039089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.039145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-04-15 02:04:54.039382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.039628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.039677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 
00:30:08.680 [2024-04-15 02:04:54.039957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.040237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.040266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-04-15 02:04:54.040558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.041026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.041088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-04-15 02:04:54.041331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.041608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.041652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-04-15 02:04:54.041961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.042205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.042247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-04-15 02:04:54.042520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.042798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.042842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-04-15 02:04:54.043065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.043300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.043327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-04-15 02:04:54.043635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.044137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.044163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 
00:30:08.680 [2024-04-15 02:04:54.044381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.044656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.044699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-04-15 02:04:54.044955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.045204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.045231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-04-15 02:04:54.045513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.045818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.045867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-04-15 02:04:54.046084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.046343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.046384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-04-15 02:04:54.046674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.047013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.047073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-04-15 02:04:54.047311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.047560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.047604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-04-15 02:04:54.047824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.048112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.048138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 
00:30:08.680 [2024-04-15 02:04:54.048411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.048783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.048839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-04-15 02:04:54.049059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.049267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.049294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-04-15 02:04:54.049520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.049783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.049828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-04-15 02:04:54.050084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.050386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.050412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.680 qpair failed and we were unable to recover it. 00:30:08.680 [2024-04-15 02:04:54.050670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.680 [2024-04-15 02:04:54.051118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.051144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.051404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.051633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.051680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.052026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.052361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.052404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 
00:30:08.681 [2024-04-15 02:04:54.052651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.052978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.053022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.053287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.053817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.053871] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.054095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.054335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.054362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.054637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.054918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.054961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.055311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.055814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.055872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.056113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.056348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.056373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.056636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.056990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.057038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 
00:30:08.681 [2024-04-15 02:04:54.057309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.057732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.057791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.058069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.058583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.058641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.058939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.059222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.059251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.059535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.059825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.059872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.060168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.060566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.060622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.060909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.061215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.061243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.061511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.061804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.061849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 
00:30:08.681 [2024-04-15 02:04:54.062102] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.062359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.062401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.062692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.063143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.063185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.063477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.063990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.064039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.064349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.064646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.064689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.064981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.065293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.065320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.065610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.066125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.066151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.066525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.066910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.066965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 
00:30:08.681 [2024-04-15 02:04:54.067262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.067657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.067706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.067996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.068301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.068327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.068641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.069023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.069073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.069302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.069557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.069599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.069862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.070125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.681 [2024-04-15 02:04:54.070165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.681 qpair failed and we were unable to recover it. 00:30:08.681 [2024-04-15 02:04:54.070367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.070627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.070671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.070938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.071312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.071354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 
00:30:08.682 [2024-04-15 02:04:54.071646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.072116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.072141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.072396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.072659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.072704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.072965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.073486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.073526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.073825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.074062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.074089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.074382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.074774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.074826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.075080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.075291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.075317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.075596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.075991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.076059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 
00:30:08.682 [2024-04-15 02:04:54.076479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.076766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.076814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.077040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.077294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.077320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.077595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.078123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.078149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.078417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.078750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.078793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.079069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.079310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.079350] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.079599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.079896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.079940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.080294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.080742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.080791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 
00:30:08.682 [2024-04-15 02:04:54.080983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.081232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.081276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.081507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.081902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.081954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.082219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.082509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.082538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.082847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.083164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.083190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.083483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.083926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.083973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.084212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.084503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.084532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.084839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.085192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.085233] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 
00:30:08.682 [2024-04-15 02:04:54.085477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.085789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.085845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.086105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.086366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.086392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.086669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.086971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.087000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.087278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.087516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.087561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.087859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.088179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.088206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.682 qpair failed and we were unable to recover it. 00:30:08.682 [2024-04-15 02:04:54.088439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.682 [2024-04-15 02:04:54.088928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.683 [2024-04-15 02:04:54.088981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.683 qpair failed and we were unable to recover it. 00:30:08.683 [2024-04-15 02:04:54.089235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.683 [2024-04-15 02:04:54.089485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.683 [2024-04-15 02:04:54.089529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.683 qpair failed and we were unable to recover it. 
00:30:08.683 [2024-04-15 02:04:54.089810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.683 [2024-04-15 02:04:54.090140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.683 [2024-04-15 02:04:54.090168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420
00:30:08.683 qpair failed and we were unable to recover it.
00:30:08.683 [... the same retry sequence — two posix.c:1032:posix_sock_create connect() failed, errno = 111 records, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f50fc000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." — repeats continuously from 2024-04-15 02:04:54.090611 through 02:04:54.190883 ...]
00:30:08.688 [2024-04-15 02:04:54.191182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.688 [2024-04-15 02:04:54.191594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.688 [2024-04-15 02:04:54.191633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420
00:30:08.688 qpair failed and we were unable to recover it.
00:30:08.688 [2024-04-15 02:04:54.192010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.192298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.192325] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.688 qpair failed and we were unable to recover it. 00:30:08.688 [2024-04-15 02:04:54.192592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.192870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.192914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.688 qpair failed and we were unable to recover it. 00:30:08.688 [2024-04-15 02:04:54.193196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.193483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.193512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.688 qpair failed and we were unable to recover it. 00:30:08.688 [2024-04-15 02:04:54.193820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.194116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.194143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.688 qpair failed and we were unable to recover it. 00:30:08.688 [2024-04-15 02:04:54.194373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.194750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.194800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.688 qpair failed and we were unable to recover it. 00:30:08.688 [2024-04-15 02:04:54.195044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.195261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.195288] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.688 qpair failed and we were unable to recover it. 00:30:08.688 [2024-04-15 02:04:54.195608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.196068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.196128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.688 qpair failed and we were unable to recover it. 
00:30:08.688 [2024-04-15 02:04:54.196418] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.196724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.196751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.688 qpair failed and we were unable to recover it. 00:30:08.688 [2024-04-15 02:04:54.197125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.197404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.197429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.688 qpair failed and we were unable to recover it. 00:30:08.688 [2024-04-15 02:04:54.197701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.198103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.198130] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.688 qpair failed and we were unable to recover it. 00:30:08.688 [2024-04-15 02:04:54.198471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.198997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.199058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.688 qpair failed and we were unable to recover it. 00:30:08.688 [2024-04-15 02:04:54.199313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.688 [2024-04-15 02:04:54.199614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.199658] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.200009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.200329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.200355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.200610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.200901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.200945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 
00:30:08.689 [2024-04-15 02:04:54.201177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.201430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.201473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.201762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.201993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.202019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.202261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.202565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.202609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.202863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.203122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.203148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.203403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.203783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.203836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.204111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.204478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.204536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.204855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.205146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.205173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 
00:30:08.689 [2024-04-15 02:04:54.205466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.205913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.205966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.206189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.206444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.206486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.206782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.207103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.207129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.207381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.207677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.207705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.207988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.208421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.208477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.208735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.209123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.209153] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.209443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.209808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.209859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 
00:30:08.689 [2024-04-15 02:04:54.210123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.210409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.210435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.210715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.211200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.211226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.211504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.211825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.211852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.212111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.212308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.212334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.212569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.212897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.212940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.213251] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.213520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.213564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.689 [2024-04-15 02:04:54.213863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.214110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.214136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 
00:30:08.689 [2024-04-15 02:04:54.214395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.214675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.689 [2024-04-15 02:04:54.214719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.689 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.214973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.215232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.215260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.215514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.215757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.215802] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.216027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.216245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.216273] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.216530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.216772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.216817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.217075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.217283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.217313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.217542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.218039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.218109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 
00:30:08.690 [2024-04-15 02:04:54.218346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.218623] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.218667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.218961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.219221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.219249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.219462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.219736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.219780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.220029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.220258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.220287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.220559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.220768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.220797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.221025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.221229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.221258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.221516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.221782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.221826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 
00:30:08.690 [2024-04-15 02:04:54.222067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.222315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.222342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.222604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.222849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.222895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.223148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.223399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.223443] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.223723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.223989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.224032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.224308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.224552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.224599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.224856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.225095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.225132] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.225329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.225586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.225630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 
00:30:08.690 [2024-04-15 02:04:54.225883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.226207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.226235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.226507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.226769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.226812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.227113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.227369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.227413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.227669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.227910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.227955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.228161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.228414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.228460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.228707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.228946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.228973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.229167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.229393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.229436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 
00:30:08.690 [2024-04-15 02:04:54.229629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.230098] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.230126] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.690 qpair failed and we were unable to recover it. 00:30:08.690 [2024-04-15 02:04:54.230382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.690 [2024-04-15 02:04:54.230664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.230710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.230992] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.231238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.231266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.231494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.231855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.231906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.232162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.232420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.232463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.232690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.233075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.233127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.233374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.233627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.233671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 
00:30:08.691 [2024-04-15 02:04:54.233949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.234173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.234204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.234528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.234804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.234832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.235057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.235281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.235308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.235598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.235831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.235876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.236131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.236359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.236386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.236678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.236950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.236996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.237222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.237416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.237444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 
00:30:08.691 [2024-04-15 02:04:54.237663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.237947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.237992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.238193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.238477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.238506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.238771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.239009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.239035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.239291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.239647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.239706] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.239966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.240205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.240232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.240461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.240710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.240753] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.240998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.241222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.241250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 
00:30:08.691 [2024-04-15 02:04:54.241541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.242094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.242121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.242371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.242598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.242641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.242914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.243162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.243189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.243440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.243864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.243914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.244121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.244373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.244416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.244673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.245160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.245187] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.245433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.245709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.245756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 
00:30:08.691 [2024-04-15 02:04:54.245955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.246148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.246175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.246457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.246751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.691 [2024-04-15 02:04:54.246797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.691 qpair failed and we were unable to recover it. 00:30:08.691 [2024-04-15 02:04:54.247024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.247266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.247304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.692 qpair failed and we were unable to recover it. 00:30:08.692 [2024-04-15 02:04:54.247538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.247836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.247880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.692 qpair failed and we were unable to recover it. 00:30:08.692 [2024-04-15 02:04:54.248094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.248315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.248341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.692 qpair failed and we were unable to recover it. 00:30:08.692 [2024-04-15 02:04:54.248563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.249003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.249059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.692 qpair failed and we were unable to recover it. 00:30:08.692 [2024-04-15 02:04:54.249259] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.249511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.249555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.692 qpair failed and we were unable to recover it. 
00:30:08.692 [2024-04-15 02:04:54.249813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.250028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.250065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.692 qpair failed and we were unable to recover it. 00:30:08.692 [2024-04-15 02:04:54.250321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.250551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.250594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.692 qpair failed and we were unable to recover it. 00:30:08.692 [2024-04-15 02:04:54.250830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.251094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.251122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.692 qpair failed and we were unable to recover it. 00:30:08.692 [2024-04-15 02:04:54.251324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.251541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.251595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.692 qpair failed and we were unable to recover it. 00:30:08.692 [2024-04-15 02:04:54.251848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.252091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.252117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.692 qpair failed and we were unable to recover it. 00:30:08.692 [2024-04-15 02:04:54.252346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.252579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.252623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.692 qpair failed and we were unable to recover it. 00:30:08.692 [2024-04-15 02:04:54.252876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.253097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.692 [2024-04-15 02:04:54.253124] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.692 qpair failed and we were unable to recover it. 
[The identical four-line failure sequence (two posix.c:1032 connect() failures with errno = 111, one nvme_tcp.c:2289 sock connection error for tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every subsequent retry from 02:04:54.253351 through 02:04:54.332871.]
00:30:08.988 [2024-04-15 02:04:54.333096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.333309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.333336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.333580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.333802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.333829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.334056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.334301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.334328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.334578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.334801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.334827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.335037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.335248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.335275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.335497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.335743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.335769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.335958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.336195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.336222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 
00:30:08.988 [2024-04-15 02:04:54.336463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.336687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.336714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.336958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.337181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.337209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.337453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.337703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.337730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.337965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.338218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.338245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.338457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.338676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.338703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.338948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.339171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.339198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.339446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.339697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.339724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 
00:30:08.988 [2024-04-15 02:04:54.340000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.340370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.340410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.340680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.340872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.340900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.341097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.341308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.341334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.341559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.341750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.341776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.342088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.342319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.342346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.342579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.342803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.342831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.343127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.343406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.343433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 
00:30:08.988 [2024-04-15 02:04:54.343698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.343928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.343954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.344191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.344450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.344477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.988 qpair failed and we were unable to recover it. 00:30:08.988 [2024-04-15 02:04:54.344691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.988 [2024-04-15 02:04:54.344941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.344967] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.345222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.345462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.345504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.345873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.346109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.346136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.346360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.346598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.346624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.346849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.347086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.347114] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 
00:30:08.989 [2024-04-15 02:04:54.347309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.347553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.347578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.347823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.348010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.348036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.348296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.348536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.348560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.348788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.349014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.349039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.349257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.349475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.349501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.349747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.349938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.349965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.350295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.350493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.350520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 
00:30:08.989 [2024-04-15 02:04:54.350750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.350951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.350976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.351237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.351463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.351489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.351740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.351939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.351965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.352210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.352457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.352482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.352708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.352937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.352964] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.353198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.353446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.353471] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.353717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.353948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.353973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 
00:30:08.989 [2024-04-15 02:04:54.354199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.354417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.354442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.354694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.354920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.354946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.355192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.355427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.355451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.355707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.355916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.355941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.356202] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.356458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.356483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.356742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.356934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.356960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.357189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.357432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.357457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 
00:30:08.989 [2024-04-15 02:04:54.357677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.357938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.357963] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.358208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.358425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.358450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.989 [2024-04-15 02:04:54.358670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.358912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.989 [2024-04-15 02:04:54.358937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.989 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.359185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.359380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.359407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.359628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.359869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.359894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.360142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.360366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.360391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.360627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.360888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.360913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 
00:30:08.990 [2024-04-15 02:04:54.361125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.361372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.361397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.361602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.361856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.361882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.362110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.362335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.362375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.362653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.362903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.362928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.363144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.363360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.363385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.363595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.363857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.363883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.364074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.364272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.364297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 
00:30:08.990 [2024-04-15 02:04:54.364572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.364904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.364929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.365147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.365375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.365400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.365660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.365864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.365888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.366190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.366378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.366403] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.366629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.366843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.366868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.367055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.367288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.367313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.367533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.367754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.367780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50fc000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 
00:30:08.990 [2024-04-15 02:04:54.367911] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15ef100 is same with the state(5) to be set 00:30:08.990 [2024-04-15 02:04:54.368207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.368415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.368444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.368671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.368922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.368961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.369254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.369471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.369496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.369739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.369964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.369988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.370197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.370396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.370423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.370647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.370854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.370880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.371095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.371319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.371345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 
00:30:08.990 [2024-04-15 02:04:54.371553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.371849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.371875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.372099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.372321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.372346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.372570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.372791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.372816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.990 qpair failed and we were unable to recover it. 00:30:08.990 [2024-04-15 02:04:54.373042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.990 [2024-04-15 02:04:54.373247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.373272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-04-15 02:04:54.373466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.373762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.373787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-04-15 02:04:54.373991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.374190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.374216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-04-15 02:04:54.374416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.374631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.374656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 
00:30:08.991 [2024-04-15 02:04:54.374903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.375155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.375180] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-04-15 02:04:54.375379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.375588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.375612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-04-15 02:04:54.375864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.376060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.376085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-04-15 02:04:54.376301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.376524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.376549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-04-15 02:04:54.376789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.377008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.377033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-04-15 02:04:54.377263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.377515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.377541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-04-15 02:04:54.377771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.378019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.378044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 
00:30:08.991 [2024-04-15 02:04:54.378256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.378482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.378506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-04-15 02:04:54.378694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.378896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.378921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-04-15 02:04:54.379142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.379385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.379410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-04-15 02:04:54.379659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.379849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.379874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-04-15 02:04:54.380101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.380306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.380330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-04-15 02:04:54.380585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.380807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.380832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 00:30:08.991 [2024-04-15 02:04:54.381028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.381249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.991 [2024-04-15 02:04:54.381274] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.991 qpair failed and we were unable to recover it. 
00:30:08.991 [2024-04-15 02:04:54.381504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.991 [2024-04-15 02:04:54.381712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:08.991 [2024-04-15 02:04:54.381736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:08.991 qpair failed and we were unable to recover it.
[... the same four-line failure pattern repeats for every subsequent connection attempt, with in-test timestamps advancing from 02:04:54.381 through 02:04:54.454 and wall-clock prefixes 00:30:08.991 through 00:30:08.998; each attempt fails in posix_sock_create with connect() errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420, and the qpair fails without recovery ...]
00:30:08.998 [2024-04-15 02:04:54.454571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.454770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.454794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.455076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.455327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.455351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.455554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.455772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.455797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.455989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.456210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.456237] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.456466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.456689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.456714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.456996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.457312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.457337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.457559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.457771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.457795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 
00:30:08.998 [2024-04-15 02:04:54.457995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.458210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.458236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.458438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.458682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.458707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.458902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.459124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.459150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.459366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.459586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.459610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.459829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.460076] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.460101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.460301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.460518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.460543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.460770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.460989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.461014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 
00:30:08.998 [2024-04-15 02:04:54.461218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.461491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.461515] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.461731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.461977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.462001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.462205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.462433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.462458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.462678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.462886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.462910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.463165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.463355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.463380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.463626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.463831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.463856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.464084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.464320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.464345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 
00:30:08.998 [2024-04-15 02:04:54.464601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.464816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.464841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.465055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.465302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.465328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.465554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.465774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.465800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.466022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.466250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.466276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.466479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.466665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.466690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.466947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.467205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.467231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.998 qpair failed and we were unable to recover it. 00:30:08.998 [2024-04-15 02:04:54.467437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.998 [2024-04-15 02:04:54.467673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.467697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 
00:30:08.999 [2024-04-15 02:04:54.467950] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.468157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.468183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.468391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.468631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.468657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.468878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.469079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.469104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.469295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.469518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.469543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.469793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.469991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.470015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.470240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.470461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.470486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.470711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.470904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.470929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 
00:30:08.999 [2024-04-15 02:04:54.471130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.471330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.471355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.471574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.471773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.471799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.472000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.472199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.472227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.472454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.472671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.472696] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.472900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.473125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.473150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.473371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.473586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.473612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.473805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.474003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.474027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 
00:30:08.999 [2024-04-15 02:04:54.474224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.474419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.474444] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.474663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.474880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.474904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.475100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.475295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.475321] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.475541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.475767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.475792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.476012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.476219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.476244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.476463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.476678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.476703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.476964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.477187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.477212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 
00:30:08.999 [2024-04-15 02:04:54.477432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.477630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.477655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.477845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.478082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.478107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.478316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.478539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.478564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.478755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.478967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.478991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.479188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.479391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.479416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.479644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.479843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.479870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 00:30:08.999 [2024-04-15 02:04:54.480069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.480289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:08.999 [2024-04-15 02:04:54.480313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:08.999 qpair failed and we were unable to recover it. 
00:30:08.999 [2024-04-15 02:04:54.480541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.480733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.480758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.480975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.481170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.481197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.481419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.481644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.481669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.481891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.482119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.482144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.482369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.482590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.482615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.482808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.483026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.483068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.483295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.483510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.483537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 
00:30:09.000 [2024-04-15 02:04:54.483741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.483952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.483976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.484215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.484411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.484436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.484620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.484855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.484879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.485101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.485322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.485346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.485569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.485766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.485791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.485982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.486175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.486201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.486394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.486616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.486641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 
00:30:09.000 [2024-04-15 02:04:54.486866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.487103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.487128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.487324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.487543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.487568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.487799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.488016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.488041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.488253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.488445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.488470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.488661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.488877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.488901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.489103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.489347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.489371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.489577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.489774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.489798] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 
00:30:09.000 [2024-04-15 02:04:54.490044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.490245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.490270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.490495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.490692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.490717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.490909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.491113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.491139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.491359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.491557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.491582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.491800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.491987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.492011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.492214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.492404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.492429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.492621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.492815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.492840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 
00:30:09.000 [2024-04-15 02:04:54.493081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.493284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.000 [2024-04-15 02:04:54.493309] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.000 qpair failed and we were unable to recover it. 00:30:09.000 [2024-04-15 02:04:54.493500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.493694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.493719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-04-15 02:04:54.493914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.494138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.494163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-04-15 02:04:54.494383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.494605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.494631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-04-15 02:04:54.494833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.495028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.495068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-04-15 02:04:54.495266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.495494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.495519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-04-15 02:04:54.495713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.495912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.495937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 
00:30:09.001 [2024-04-15 02:04:54.496158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.496424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.496449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-04-15 02:04:54.496666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.496859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.496884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-04-15 02:04:54.497088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.497347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.497372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-04-15 02:04:54.497574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.497767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.497794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-04-15 02:04:54.497996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.498222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.498249] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-04-15 02:04:54.498450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.498678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.498703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-04-15 02:04:54.498898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.499096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.499122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 
00:30:09.001 [2024-04-15 02:04:54.499342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.499536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.499560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-04-15 02:04:54.499781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.499977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.500006] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-04-15 02:04:54.500235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.500470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.500494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-04-15 02:04:54.500695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.500916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.500942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-04-15 02:04:54.501171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.501414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.501439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-04-15 02:04:54.501632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.501823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.501849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 00:30:09.001 [2024-04-15 02:04:54.502068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.502311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.001 [2024-04-15 02:04:54.502336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.001 qpair failed and we were unable to recover it. 
00:30:09.001 - 00:30:09.007 [2024-04-15 02:04:54.499781 - 02:04:54.570834] (the same four-line sequence repeats verbatim for every subsequent reconnect attempt in this window: two posix_sock_create connect() failures with errno = 111, the nvme_tcp_qpair_connect_sock error for tqpair=0x7f50f4000b90 at 10.0.0.2 port 4420, and "qpair failed and we were unable to recover it."; only the timestamps advance)
00:30:09.007 [2024-04-15 02:04:54.571058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.571284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.571310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.571545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.571770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.571794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.571985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.572182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.572209] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.572435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.572654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.572678] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.572923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.573144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.573169] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.573364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.573579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.573605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.573794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.574015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.574039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 
00:30:09.007 [2024-04-15 02:04:54.574266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.574457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.574482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.574694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.574931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.574959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.575159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.575359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.575383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.575606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.575796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.575821] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.576058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.576280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.576307] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.576501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.576746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.576771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.577012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.577240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.577265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 
00:30:09.007 [2024-04-15 02:04:54.577483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.577706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.577731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.578009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.578232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.578257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.578452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.578675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.578699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.578945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.579170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.579195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.579452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.579644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.579677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.579925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.580135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.580160] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.580356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.580598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.580622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 
00:30:09.007 [2024-04-15 02:04:54.580868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.581091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.581116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.581318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.581541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.581565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.581757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.581947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.581971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.582204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.582400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.582426] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.582665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.582907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.582932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.007 [2024-04-15 02:04:54.583156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.583375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.007 [2024-04-15 02:04:54.583400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.007 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.583651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.583875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.583901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 
00:30:09.008 [2024-04-15 02:04:54.584155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.584347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.584391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.584643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.584854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.584878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.585129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.585345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.585368] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.585634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.585851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.585876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.586100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.586318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.586343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.586538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.586766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.586789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.587005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.587210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.587235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 
00:30:09.008 [2024-04-15 02:04:54.587458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.587707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.587731] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.587947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.588169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.588194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.588419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.588615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.588639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.588856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.589084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.589111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.589343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.589569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.589595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.589784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.590001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.590027] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.590257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.590462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.590486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 
00:30:09.008 [2024-04-15 02:04:54.590722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.590948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.590972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.591211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.591432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.591457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.591712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.591962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.591987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.592295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.592516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.592543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.592770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.593002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.593028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.593260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.593468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.593492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.593725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.593974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.593998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 
00:30:09.008 [2024-04-15 02:04:54.594260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.594459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.594483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.594702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.594922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.594946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.595151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.595378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.595404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.595656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.595910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.595935] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.596141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.596371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.596397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.596614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.596833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.596858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.008 qpair failed and we were unable to recover it. 00:30:09.008 [2024-04-15 02:04:54.597084] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.008 [2024-04-15 02:04:54.597308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.597348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 
00:30:09.009 [2024-04-15 02:04:54.597603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.597836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.597863] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.598137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.598364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.598389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.598592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.598890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.598915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.599142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.599369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.599396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.599640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.599866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.599891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.600092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.600284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.600311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.600553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.600784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.600808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 
00:30:09.009 [2024-04-15 02:04:54.601118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.601340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.601365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.601587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.601821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.601847] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.602095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.602340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.602365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.602561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.602784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.602811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.603065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.603265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.603290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.603503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.603721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.603746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.603976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.604206] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.604232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 
00:30:09.009 [2024-04-15 02:04:54.604458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.604686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.604711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.604929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.605176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.605201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.605401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.605599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.605626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.605849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.606071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.606098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.606299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.606545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.606570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.606768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.606979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.607004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.607228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.607497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.607522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 
00:30:09.009 [2024-04-15 02:04:54.607788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.608039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.608072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.608270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.608526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.608551] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.608776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.609012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.609036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.609242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.609493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.609517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.609736] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.609999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.610038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.610286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.610508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.610533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 00:30:09.009 [2024-04-15 02:04:54.610760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.610997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.611021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.009 qpair failed and we were unable to recover it. 
00:30:09.009 [2024-04-15 02:04:54.611253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.009 [2024-04-15 02:04:54.611477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.611502] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.010 qpair failed and we were unable to recover it. 00:30:09.010 [2024-04-15 02:04:54.611752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.611971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.611996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.010 qpair failed and we were unable to recover it. 00:30:09.010 [2024-04-15 02:04:54.612221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.612436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.612460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.010 qpair failed and we were unable to recover it. 00:30:09.010 [2024-04-15 02:04:54.612709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.612925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.612950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.010 qpair failed and we were unable to recover it. 00:30:09.010 [2024-04-15 02:04:54.613169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.613428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.613453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.010 qpair failed and we were unable to recover it. 00:30:09.010 [2024-04-15 02:04:54.613760] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.614033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.614063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.010 qpair failed and we were unable to recover it. 00:30:09.010 [2024-04-15 02:04:54.614264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.614486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.614511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.010 qpair failed and we were unable to recover it. 
00:30:09.010 [2024-04-15 02:04:54.614709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.614934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.614959] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.010 qpair failed and we were unable to recover it. 00:30:09.010 [2024-04-15 02:04:54.615219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.615452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.615476] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.010 qpair failed and we were unable to recover it. 00:30:09.010 [2024-04-15 02:04:54.615711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.615930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.615956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.010 qpair failed and we were unable to recover it. 00:30:09.010 [2024-04-15 02:04:54.616162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.616361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.616386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.010 qpair failed and we were unable to recover it. 00:30:09.010 [2024-04-15 02:04:54.616637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.616835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.616859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.010 qpair failed and we were unable to recover it. 00:30:09.010 [2024-04-15 02:04:54.617073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.617358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.617382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.010 qpair failed and we were unable to recover it. 00:30:09.010 [2024-04-15 02:04:54.617673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.617868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.617893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.010 qpair failed and we were unable to recover it. 
00:30:09.010 [2024-04-15 02:04:54.618113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.618340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.010 [2024-04-15 02:04:54.618365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.010 qpair failed and we were unable to recover it.

[The three-message cycle above repeats unchanged for every reconnect attempt logged between 02:04:54.618 and 02:04:54.692: each connect() to 10.0.0.2:4420 fails with errno = 111, nvme_tcp_qpair_connect_sock then reports a sock connection error for tqpair=0x7f50f4000b90, and the qpair cannot be recovered. Only the timestamps differ between repetitions.]
00:30:09.287 [2024-04-15 02:04:54.692835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-04-15 02:04:54.693080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-04-15 02:04:54.693106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.287 qpair failed and we were unable to recover it. 00:30:09.287 [2024-04-15 02:04:54.693333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-04-15 02:04:54.693528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-04-15 02:04:54.693553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.287 qpair failed and we were unable to recover it. 00:30:09.287 [2024-04-15 02:04:54.693771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-04-15 02:04:54.693987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-04-15 02:04:54.694012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.287 qpair failed and we were unable to recover it. 00:30:09.287 [2024-04-15 02:04:54.694207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-04-15 02:04:54.694437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-04-15 02:04:54.694463] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.287 qpair failed and we were unable to recover it. 00:30:09.287 [2024-04-15 02:04:54.694672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-04-15 02:04:54.694920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-04-15 02:04:54.694945] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.287 qpair failed and we were unable to recover it. 00:30:09.287 [2024-04-15 02:04:54.695157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-04-15 02:04:54.695396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-04-15 02:04:54.695422] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.287 qpair failed and we were unable to recover it. 00:30:09.287 [2024-04-15 02:04:54.695645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-04-15 02:04:54.695866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.287 [2024-04-15 02:04:54.695893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.287 qpair failed and we were unable to recover it. 
00:30:09.288 [2024-04-15 02:04:54.696096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.696327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.696354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.696579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.696800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.696824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.697070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.697295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.697322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.697571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.697766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.697791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.698011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.698238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.698266] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.698481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.698701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.698726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.698972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.699192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.699217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 
00:30:09.288 [2024-04-15 02:04:54.699436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.699654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.699679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.699875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.700120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.700146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.700348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.700546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.700570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.700789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.701007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.701034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.701301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.701572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.701597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.701842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.702064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.702089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.702313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.702504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.702528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 
00:30:09.288 [2024-04-15 02:04:54.702750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.702974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.702999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.703242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.703448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.703472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.703693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.703948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.703973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.704193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.704389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.704414] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.704657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.704886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.704910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.705143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.705339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.705365] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.705588] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.705805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.705830] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 
00:30:09.288 [2024-04-15 02:04:54.706054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.706256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.706282] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.706524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.706714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.706740] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.706962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.707154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.707179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.707387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.707595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.707619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.707860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.708096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.708123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.708338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.708580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.708605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 00:30:09.288 [2024-04-15 02:04:54.708827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.709074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.709100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.288 qpair failed and we were unable to recover it. 
00:30:09.288 [2024-04-15 02:04:54.709331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.288 [2024-04-15 02:04:54.709547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.709572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.709795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.710187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.710227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.710470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.710730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.710755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.710981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.711229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.711255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.711478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.711686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.711712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.711957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.712162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.712188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.712383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.712663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.712688] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 
00:30:09.289 [2024-04-15 02:04:54.712960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.713188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.713214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.713434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.713676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.713700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.713923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.714150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.714176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.714402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.714620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.714645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.714868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.715085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.715111] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.715318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.715545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.715570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.715766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.715967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.715992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 
00:30:09.289 [2024-04-15 02:04:54.716214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.716414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.716439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.716636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.716836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.716862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.717081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.717276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.717301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.717501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.717702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.717727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.717911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.718115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.718142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.718340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.718591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.718616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.718814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.719029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.719065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 
00:30:09.289 [2024-04-15 02:04:54.719258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.719516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.719541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.719741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.719964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.719989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.720188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.720409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.720439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.720666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.720925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.720950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.721171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.721371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.721396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.721589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.721786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.721811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.722007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.722267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.722292] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 
00:30:09.289 [2024-04-15 02:04:54.722495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.722689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.722714] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.289 qpair failed and we were unable to recover it. 00:30:09.289 [2024-04-15 02:04:54.722962] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.289 [2024-04-15 02:04:54.723173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.723199] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.723461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.723649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.723674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.723893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.724093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.724118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.724318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.724568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.724593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.724789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.725020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.725060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.725278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.725499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.725524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 
00:30:09.290 [2024-04-15 02:04:54.725722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.725908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.725932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.726139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.726341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.726370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.726621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.726819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.726843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.727040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.727247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.727272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.727479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.727701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.727726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.727921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.728147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.728173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.728415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.728611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.728637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 
00:30:09.290 [2024-04-15 02:04:54.728839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.729064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.729090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.729313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.729561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.729590] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.729836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.730035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.730066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.730301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.730521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.730547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.730771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.730961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.730986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.731231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.731474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.731499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.731719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.731915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.731940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 
00:30:09.290 [2024-04-15 02:04:54.732135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.732358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.732383] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.732582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.732816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.732840] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.733068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.733295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.733319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.733517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.733732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.733757] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.733979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.734183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.734213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.734411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.734603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.734627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.734853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.735052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.735077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 
00:30:09.290 [2024-04-15 02:04:54.735306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.735546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.735570] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.290 qpair failed and we were unable to recover it. 00:30:09.290 [2024-04-15 02:04:54.735764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.290 [2024-04-15 02:04:54.735994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.736019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.291 qpair failed and we were unable to recover it. 00:30:09.291 [2024-04-15 02:04:54.736239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.736441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.736466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.291 qpair failed and we were unable to recover it. 00:30:09.291 [2024-04-15 02:04:54.736688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.736878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.736905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.291 qpair failed and we were unable to recover it. 00:30:09.291 [2024-04-15 02:04:54.737104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.737305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.737330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.291 qpair failed and we were unable to recover it. 00:30:09.291 [2024-04-15 02:04:54.737561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.737782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.737807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.291 qpair failed and we were unable to recover it. 00:30:09.291 [2024-04-15 02:04:54.738026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.738237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.738264] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.291 qpair failed and we were unable to recover it. 
00:30:09.291 [2024-04-15 02:04:54.738483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.738680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.738705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.291 qpair failed and we were unable to recover it. 00:30:09.291 [2024-04-15 02:04:54.738904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.739110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.739136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.291 qpair failed and we were unable to recover it. 00:30:09.291 [2024-04-15 02:04:54.739335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.739557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.739583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.291 qpair failed and we were unable to recover it. 00:30:09.291 [2024-04-15 02:04:54.739780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.740017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.740042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.291 qpair failed and we were unable to recover it. 00:30:09.291 [2024-04-15 02:04:54.740270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.740467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.740495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.291 qpair failed and we were unable to recover it. 00:30:09.291 [2024-04-15 02:04:54.740690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.740900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.740925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.291 qpair failed and we were unable to recover it. 00:30:09.291 [2024-04-15 02:04:54.741150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.741345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.291 [2024-04-15 02:04:54.741370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.291 qpair failed and we were unable to recover it. 
00:30:09.296 [2024-04-15 02:04:54.810906] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.296 [2024-04-15 02:04:54.811136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.296 [2024-04-15 02:04:54.811162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.296 qpair failed and we were unable to recover it. 00:30:09.296 [2024-04-15 02:04:54.811358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.296 [2024-04-15 02:04:54.811553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.296 [2024-04-15 02:04:54.811579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.296 qpair failed and we were unable to recover it. 00:30:09.296 [2024-04-15 02:04:54.811859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.296 [2024-04-15 02:04:54.812078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.296 [2024-04-15 02:04:54.812104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.296 qpair failed and we were unable to recover it. 00:30:09.296 [2024-04-15 02:04:54.812321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.296 [2024-04-15 02:04:54.812544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.296 [2024-04-15 02:04:54.812568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.296 qpair failed and we were unable to recover it. 00:30:09.296 [2024-04-15 02:04:54.812782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.813026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.813058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.813258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.813485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.813509] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.813703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.813924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.813949] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 
00:30:09.297 [2024-04-15 02:04:54.814146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.814362] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.814386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.814648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.814853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.814877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.815095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.815295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.815320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.815568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.815796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.815822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.816009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.816254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.816280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.816473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.816726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.816752] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.816973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.817172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.817200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 
00:30:09.297 [2024-04-15 02:04:54.817434] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.817655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.817680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.817903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.818151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.818177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.818376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.818601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.818626] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.818841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.819073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.819110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.819366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.819568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.819595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.819814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.820013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.820039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.820316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.820517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.820541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 
00:30:09.297 [2024-04-15 02:04:54.820754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.820976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.821001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.821243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.821428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.821453] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.821757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.821985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.822009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.822244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.822465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.822489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.822730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.822949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.822975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.823199] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.823435] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.823461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.823683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.823908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.823933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 
00:30:09.297 [2024-04-15 02:04:54.824185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.824373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.824398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.824644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.824868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.824893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.825140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.825358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.825385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.825611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.825827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.825852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.297 qpair failed and we were unable to recover it. 00:30:09.297 [2024-04-15 02:04:54.826063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.826289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.297 [2024-04-15 02:04:54.826315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.826535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.826748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.826774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.827062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.827283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.827308] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 
00:30:09.298 [2024-04-15 02:04:54.827524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.827745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.827769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.828027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.828272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.828297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.828521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.828738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.828762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.829041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.829265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.829290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.829537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.829759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.829785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.830009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.830245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.830270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.830480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.830698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.830724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 
00:30:09.298 [2024-04-15 02:04:54.830945] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.831143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.831170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.831372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.831592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.831618] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.831805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.832127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.832152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.832376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.832620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.832645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.832864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.833103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.833128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.833353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.833554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.833581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.833827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.834054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.834079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 
00:30:09.298 [2024-04-15 02:04:54.834323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.834514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.834539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.834730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.834968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.834993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.835240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.835439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.835464] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.835692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.835886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.835911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.836113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.836330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.836355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.836603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.836818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.836842] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.837071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.837298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.837323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 
00:30:09.298 [2024-04-15 02:04:54.837525] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.837787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.837812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.838031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.838266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.838291] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.838521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.838765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.838790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.839009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.839215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.839240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.839437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.839658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.839683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.298 qpair failed and we were unable to recover it. 00:30:09.298 [2024-04-15 02:04:54.839941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.298 [2024-04-15 02:04:54.840186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.840212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.840437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.840656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.840681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 
00:30:09.299 [2024-04-15 02:04:54.840954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.841176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.841201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.841402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.841624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.841648] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.841867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.842125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.842151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.842372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.842598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.842625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.842857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.843050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.843075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.843308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.843587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.843612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.843880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.844174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.844200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 
00:30:09.299 [2024-04-15 02:04:54.844409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.844632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.844657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.844903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.845122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.845148] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.845363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.845606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.845631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.845944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.846142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.846168] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.846361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.846647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.846672] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.846937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.847136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.847162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.847388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.847609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.847636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 
00:30:09.299 [2024-04-15 02:04:54.847833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.848057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.848083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.848288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.848585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.848609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.848843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.849072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.849099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.849321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.849544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.849568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.849824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.850073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.850109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.850340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.850563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.850589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.850834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.851057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.851085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 
00:30:09.299 [2024-04-15 02:04:54.851309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.851521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.851545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.851755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.852006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.852031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.852296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.852499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.852526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.852786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.853014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.853038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.853257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.853481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.853506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.853753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.853971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.853995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.299 qpair failed and we were unable to recover it. 00:30:09.299 [2024-04-15 02:04:54.854193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.299 [2024-04-15 02:04:54.854409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-04-15 02:04:54.854433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 
00:30:09.300 [2024-04-15 02:04:54.854679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-04-15 02:04:54.854926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-04-15 02:04:54.854952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-04-15 02:04:54.855139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-04-15 02:04:54.855367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-04-15 02:04:54.855391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-04-15 02:04:54.855611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-04-15 02:04:54.855816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-04-15 02:04:54.855841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-04-15 02:04:54.856074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-04-15 02:04:54.856310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-04-15 02:04:54.856335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-04-15 02:04:54.856551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-04-15 02:04:54.856837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-04-15 02:04:54.856862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-04-15 02:04:54.857124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-04-15 02:04:54.857360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-04-15 02:04:54.857384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 00:30:09.300 [2024-04-15 02:04:54.857605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-04-15 02:04:54.857826] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-04-15 02:04:54.857850] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it. 
00:30:09.300 [2024-04-15 02:04:54.858066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-04-15 02:04:54.858296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.300 [2024-04-15 02:04:54.858322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.300 qpair failed and we were unable to recover it.
[... the same three-message failure pattern (two posix.c:1032:posix_sock_create connect() errors with errno = 111, then an nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock connection error for tqpair=0x7f50f4000b90, addr=10.0.0.2, port=4420) repeats continuously from 02:04:54.858066 through 02:04:54.931524 (log time 00:30:09.300 - 00:30:09.577); every attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:09.577 [2024-04-15 02:04:54.931782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.932000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.932025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.577 qpair failed and we were unable to recover it. 00:30:09.577 [2024-04-15 02:04:54.932290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.932486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.932511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.577 qpair failed and we were unable to recover it. 00:30:09.577 [2024-04-15 02:04:54.932755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.933001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.933026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.577 qpair failed and we were unable to recover it. 00:30:09.577 [2024-04-15 02:04:54.933256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.933527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.933555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.577 qpair failed and we were unable to recover it. 00:30:09.577 [2024-04-15 02:04:54.933791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.934088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.934115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.577 qpair failed and we were unable to recover it. 00:30:09.577 [2024-04-15 02:04:54.934364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.934592] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.934617] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.577 qpair failed and we were unable to recover it. 00:30:09.577 [2024-04-15 02:04:54.934817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.935039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.935071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.577 qpair failed and we were unable to recover it. 
00:30:09.577 [2024-04-15 02:04:54.935266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.935501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.935532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.577 qpair failed and we were unable to recover it. 00:30:09.577 [2024-04-15 02:04:54.935755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.935971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.935995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.577 qpair failed and we were unable to recover it. 00:30:09.577 [2024-04-15 02:04:54.936197] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.936444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.936469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.577 qpair failed and we were unable to recover it. 00:30:09.577 [2024-04-15 02:04:54.936696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.936924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.936948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.577 qpair failed and we were unable to recover it. 00:30:09.577 [2024-04-15 02:04:54.937201] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.937443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.937468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.577 qpair failed and we were unable to recover it. 00:30:09.577 [2024-04-15 02:04:54.937696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.937914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.937939] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.577 qpair failed and we were unable to recover it. 00:30:09.577 [2024-04-15 02:04:54.938129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.938335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.577 [2024-04-15 02:04:54.938361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 
00:30:09.578 [2024-04-15 02:04:54.938581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.938786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.938810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.939003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.939222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.939248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.939480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.939679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.939704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.939947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.940196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.940226] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.940428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.940650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.940675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.940947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.941170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.941195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.941452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.941670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.941694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 
00:30:09.578 [2024-04-15 02:04:54.941943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.942192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.942219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.942444] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.942641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.942666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.942886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.943111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.943137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.943337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.943530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.943557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.943782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.944022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.944054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.944276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.944493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.944517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.944734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.944957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.944982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 
00:30:09.578 [2024-04-15 02:04:54.945217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.945421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.945446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.945644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.945854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.945877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.946066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.946272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.946296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.946531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.946785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.946810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.947062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.947285] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.947310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.947505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.947723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.947748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.947971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.948194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.948219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 
00:30:09.578 [2024-04-15 02:04:54.948440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.948666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.948690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.948909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.949130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.949157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.949386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.949584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.949609] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.949832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.950057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.950082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.950281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.950498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.950525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.950746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.950967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.950991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 00:30:09.578 [2024-04-15 02:04:54.951222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.951453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.578 [2024-04-15 02:04:54.951477] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.578 qpair failed and we were unable to recover it. 
00:30:09.579 [2024-04-15 02:04:54.951718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.952351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.952380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.952610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.952834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.952859] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.953114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.953311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.953336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.953581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.953803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.953827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.954051] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.954249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.954275] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.954508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.954711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.954735] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.954936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.955130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.955157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 
00:30:09.579 [2024-04-15 02:04:54.955381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.955639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.955664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.955864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.956118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.956144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.956370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.956595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.956619] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.956825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.957026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.957058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.957279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.957472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.957496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.957713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.957910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.957934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.958160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.958414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.958439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 
00:30:09.579 [2024-04-15 02:04:54.958661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.958860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.958885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.959086] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.959286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.959310] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.959536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.959740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.959766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.959989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.960193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.960217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.960433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.960632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.960656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.960881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.961081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.961105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.961344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.961562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.961587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 
00:30:09.579 [2024-04-15 02:04:54.961811] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.962014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.962039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.962255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.962472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.962497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.962696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.962938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.962962] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.963151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.963372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.963397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.579 qpair failed and we were unable to recover it. 00:30:09.579 [2024-04-15 02:04:54.963613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.963812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.579 [2024-04-15 02:04:54.963836] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.964061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.964252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.964280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.964524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.964749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.964773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 
00:30:09.580 [2024-04-15 02:04:54.965023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.965271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.965296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.965520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.965739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.965763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.965959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.966185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.966210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.966406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.966626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.966651] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.966871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.967093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.967118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.967344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.967572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.967597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.967797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.967993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.968018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 
00:30:09.580 [2024-04-15 02:04:54.968221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.968437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.968461] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.968654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.968861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.968889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.969087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.969287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.969312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.969550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.969802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.969827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.970018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.970220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.970244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.970453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.970646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.970670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.970891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.971120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.971146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 
00:30:09.580 [2024-04-15 02:04:54.971342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.971537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.971562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.971820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.972021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.972059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.972260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.972463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.972488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.972707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.972912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.972937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.973135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.973381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.973405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.973654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.973846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.973881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.974133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.974331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.974356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 
00:30:09.580 [2024-04-15 02:04:54.974573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.974810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.974834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.975065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.975263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.975287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.975506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.975704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.975728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.975949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.976198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.976224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.580 qpair failed and we were unable to recover it. 00:30:09.580 [2024-04-15 02:04:54.976421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.580 [2024-04-15 02:04:54.976633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.581 [2024-04-15 02:04:54.976657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.581 qpair failed and we were unable to recover it. 00:30:09.581 [2024-04-15 02:04:54.976877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.581 [2024-04-15 02:04:54.977089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.581 [2024-04-15 02:04:54.977115] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.581 qpair failed and we were unable to recover it. 00:30:09.581 [2024-04-15 02:04:54.977314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.581 [2024-04-15 02:04:54.977501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.581 [2024-04-15 02:04:54.977525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.581 qpair failed and we were unable to recover it. 
00:30:09.581 [2024-04-15 02:04:54.977731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.581 [2024-04-15 02:04:54.977929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.581 [2024-04-15 02:04:54.977953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.581 qpair failed and we were unable to recover it.
00:30:09.581 .. 00:30:09.587 [2024-04-15 02:04:54.978175 .. 02:04:55.048400] (the four-line cycle above, two posix_sock_create connect() failures followed by one nvme_tcp_qpair_connect_sock error and one unrecovered qpair, repeats another 153 times; every occurrence is identical apart from the timestamp: errno = 111, tqpair=0x15e1610, addr=10.0.0.2, port=4420)
00:30:09.587 [2024-04-15 02:04:55.048666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.048860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.048887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.049136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.049338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.049362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.049590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.049785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.049810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.050031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.050263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.050289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.050486] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.050685] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.050709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.050899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.051115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.051141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.051368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.051556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.051580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 
00:30:09.587 [2024-04-15 02:04:55.051838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.052029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.052058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.052281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.052468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.052492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.052747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.052942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.052969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.053227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.053470] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.053494] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.053688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.053920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.053944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.054186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.054380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.054404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.054626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.054852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.054876] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 
00:30:09.587 [2024-04-15 02:04:55.055099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.055321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.055346] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.055569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.055788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.055814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.056103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.056357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.056382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.056605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.056799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.056824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.057055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.057287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.057312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.057534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.057767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.057790] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.058036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.058269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.058294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 
00:30:09.587 [2024-04-15 02:04:55.058481] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.058728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.058751] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.058973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.059218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.059243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.059465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.059687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.059711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.059931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.060153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.060178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.060404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.060636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.060659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.060870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.061066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.061091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 00:30:09.587 [2024-04-15 02:04:55.061308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.061508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.587 [2024-04-15 02:04:55.061532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.587 qpair failed and we were unable to recover it. 
00:30:09.587 [2024-04-15 02:04:55.061731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.061977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.062001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.062194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.062436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.062460] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.062710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.062930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.062958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.063192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.063417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.063442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.063662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.063920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.063944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.064170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.064387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.064412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.064613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.064837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.064861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 
00:30:09.588 [2024-04-15 02:04:55.065083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.065310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.065334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.065550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.065769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.065794] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.066015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.066219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.066244] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.066442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.066683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.066708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.066913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.067132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.067158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.067424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.067624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.067653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.067870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.068096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.068123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 
00:30:09.588 [2024-04-15 02:04:55.068339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.068537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.068563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.068782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.069010] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.069035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.069290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.069502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.069526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.069749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.069961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.069985] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.070231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.070454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.070479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.070699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.070897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.070923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.071128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.071351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.071376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 
00:30:09.588 [2024-04-15 02:04:55.071600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.071790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.071816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.072037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.072264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.072289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.072516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.072749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.072774] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.072990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.073211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.073236] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.073428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.073624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.073649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.073881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.074073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.074099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.588 qpair failed and we were unable to recover it. 00:30:09.588 [2024-04-15 02:04:55.074323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.074517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.588 [2024-04-15 02:04:55.074542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 
00:30:09.589 [2024-04-15 02:04:55.074776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.075022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.075054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.075253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.075475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.075499] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.075712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.075931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.075958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.076187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.076409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.076433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.076702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.076952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.076977] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.077183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.077383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.077409] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.077632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.077823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.077848] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 
00:30:09.589 [2024-04-15 02:04:55.078068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.078313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.078337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.078558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.078755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.078779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.079026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.079288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.079313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.079540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.079758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.079782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.079999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.080196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.080221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.080471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.080715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.080739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.080954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.081215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.081240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 
00:30:09.589 [2024-04-15 02:04:55.081494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.081699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.081724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.081944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.082174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.082200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.082422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.082646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.082670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.082920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.083140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.083165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.083389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.083587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.083611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.083830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.084054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.084079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.084293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.084509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.084534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 
00:30:09.589 [2024-04-15 02:04:55.084781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.085032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.085064] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.085317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.085537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.085561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.085783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.086008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.086031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.086247] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.086504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.086529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.086779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.087009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.087034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.087270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.087472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.087496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 00:30:09.589 [2024-04-15 02:04:55.087697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.087919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.589 [2024-04-15 02:04:55.087943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.589 qpair failed and we were unable to recover it. 
00:30:09.589 [2024-04-15 02:04:55.088160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.088404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.088428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.088611] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.088853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.088878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.089078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.089310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.089335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.089585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.089775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.089800] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.090020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.090279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.090304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.090521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.090744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.090767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.090959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.091203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.091229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 
00:30:09.590 [2024-04-15 02:04:55.091456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.091680] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.091709] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.091918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.092110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.092135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.092370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.092584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.092608] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.092851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.093085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.093109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.093310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.093532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.093557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.093783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.093975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.094000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.094219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.094471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.094495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 
00:30:09.590 [2024-04-15 02:04:55.094716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.094936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.094961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.095184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.095390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.095415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.095612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.095871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.095895] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.096124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.096354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.096378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.096578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.096824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.096849] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.097088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.097315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.097339] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.097563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.097759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.097784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 
00:30:09.590 [2024-04-15 02:04:55.098040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.098258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.098284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.098546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.098768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.098793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.099009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.099242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.590 [2024-04-15 02:04:55.099268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.590 qpair failed and we were unable to recover it. 00:30:09.590 [2024-04-15 02:04:55.099455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.099673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.099697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.099919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.100138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.100165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.100388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.100618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.100643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.100863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.101079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.101112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 
00:30:09.591 [2024-04-15 02:04:55.101344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.101570] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.101595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.101818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.102014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.102038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.102253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.102500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.102538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.102848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.103073] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.103098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.103346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.103590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.103615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.103817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.104119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.104144] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.104415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.104636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.104661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 
00:30:09.591 [2024-04-15 02:04:55.104927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.105142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.105167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.105392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.105583] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.105607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.105854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.106078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.106110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.106332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.106550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.106575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.106798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.107015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.107039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.107302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.107575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.107599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.107848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.108068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.108094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 
00:30:09.591 [2024-04-15 02:04:55.108345] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.108569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.108593] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.108839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.109092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.109117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.109353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.109573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.109598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.109798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.110080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.110107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.110331] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.110538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.110561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.110784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.111008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.111032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.111296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.111541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.111567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 
00:30:09.591 [2024-04-15 02:04:55.111772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.112071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.112097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.112296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.112507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.112531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.112793] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.112991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.591 [2024-04-15 02:04:55.113015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.591 qpair failed and we were unable to recover it. 00:30:09.591 [2024-04-15 02:04:55.113248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.113469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.113493] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.113721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.113914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.113938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.114269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.114519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.114543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.114769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.114964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.114988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 
00:30:09.592 [2024-04-15 02:04:55.115248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.115541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.115566] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.115815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.116023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.116067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.116304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.116509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.116539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.116795] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.117013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.117038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.117302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.117529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.117553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.117777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.117985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.118008] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.118248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.118492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.118517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 
00:30:09.592 [2024-04-15 02:04:55.118712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.118907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.118931] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.119157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.119398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.119423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.119643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.119928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.119952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.120153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.120388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.120412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.120677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.120923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.120948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.121224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.121440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.121465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.121668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.121913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.121937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 
00:30:09.592 [2024-04-15 02:04:55.122168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.122373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.122397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.122647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.122846] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.122870] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.123089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.123311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.123335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.123558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.123754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.123778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.123999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.124245] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.124270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.124516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.124705] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.124729] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.124947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.125196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.125222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 
00:30:09.592 [2024-04-15 02:04:55.125451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.125673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.125698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.592 [2024-04-15 02:04:55.125921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.126116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.592 [2024-04-15 02:04:55.126143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.592 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.126376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.126599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.126624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.126869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.127091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.127116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.127341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.127579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.127604] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.127830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.128075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.128107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.128314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.128537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.128561] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 
00:30:09.593 [2024-04-15 02:04:55.128783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.129001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.129026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.129265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.129505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.129529] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.129765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.130008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.130032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.130280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.130502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.130527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.130750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.130970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.130995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.131221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.131467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.131492] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.131692] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.131909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.131933] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 
00:30:09.593 [2024-04-15 02:04:55.132152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.132372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.132397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.132594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.132813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.132838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.133038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.133260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.133284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.133484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.133674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.133698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.133918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.134109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.134134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.134383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.134586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.134613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.134830] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.135057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.135083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 
00:30:09.593 [2024-04-15 02:04:55.135290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.135487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.135511] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.135767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.136021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.136052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.136252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.136478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.136503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.136715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.136912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.136937] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.137160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.137396] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.137421] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.137669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.137895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.137920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.138138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.138382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.138407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 
00:30:09.593 [2024-04-15 02:04:55.138631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.138873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.138898] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.139117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.139342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.139367] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.593 qpair failed and we were unable to recover it. 00:30:09.593 [2024-04-15 02:04:55.139591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.593 [2024-04-15 02:04:55.139816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.139841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.140100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.140349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.140373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.140584] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.140787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.140816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.141061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.141295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.141319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.141514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.141713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.141738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 
00:30:09.594 [2024-04-15 02:04:55.141934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.142139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.142166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.142416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.142639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.142664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.142879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.143126] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.143151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.143367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.143564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.143588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.143807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.144029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.144057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.144302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.144498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.144525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.144771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.144967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.144991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 
00:30:09.594 [2024-04-15 02:04:55.145227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.145429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.145457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.145681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.145925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.145950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.146171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.146373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.146397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.146589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.146775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.146799] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.147020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.147228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.147253] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.147477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.147675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.147699] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.147920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.148169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.148194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 
00:30:09.594 [2024-04-15 02:04:55.148394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.148640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.148663] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.148861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.149082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.149107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.149321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.149511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.149536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.149751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.149998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.150022] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.150266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.150514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.150539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.150787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.151035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.151066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.151288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.151521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.151545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 
00:30:09.594 [2024-04-15 02:04:55.151744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.151937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.151961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.152178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.152402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.152427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.594 qpair failed and we were unable to recover it. 00:30:09.594 [2024-04-15 02:04:55.152646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.594 [2024-04-15 02:04:55.152875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.152900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.595 qpair failed and we were unable to recover it. 00:30:09.595 [2024-04-15 02:04:55.153100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.153288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.153312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.595 qpair failed and we were unable to recover it. 00:30:09.595 [2024-04-15 02:04:55.153536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.153783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.153807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.595 qpair failed and we were unable to recover it. 00:30:09.595 [2024-04-15 02:04:55.154028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.154291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.154316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.595 qpair failed and we were unable to recover it. 00:30:09.595 [2024-04-15 02:04:55.154539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.154735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.154761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.595 qpair failed and we were unable to recover it. 
00:30:09.595 [2024-04-15 02:04:55.154991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.155207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.155232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.595 qpair failed and we were unable to recover it. 00:30:09.595 [2024-04-15 02:04:55.155473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.155689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.155715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.595 qpair failed and we were unable to recover it. 00:30:09.595 [2024-04-15 02:04:55.155949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.156190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.156215] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.595 qpair failed and we were unable to recover it. 00:30:09.595 [2024-04-15 02:04:55.156447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.156665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.156690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.595 qpair failed and we were unable to recover it. 00:30:09.595 [2024-04-15 02:04:55.156883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.157108] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.157133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.595 qpair failed and we were unable to recover it. 00:30:09.595 [2024-04-15 02:04:55.157328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.157552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.157576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.595 qpair failed and we were unable to recover it. 00:30:09.595 [2024-04-15 02:04:55.157775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.158018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.595 [2024-04-15 02:04:55.158042] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.595 qpair failed and we were unable to recover it. 
00:30:09.869 [2024-04-15 02:04:55.223714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.869 [2024-04-15 02:04:55.223975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.869 [2024-04-15 02:04:55.223999] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.869 qpair failed and we were unable to recover it. 00:30:09.869 [2024-04-15 02:04:55.224187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.869 [2024-04-15 02:04:55.224385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.869 [2024-04-15 02:04:55.224413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.869 qpair failed and we were unable to recover it. 00:30:09.869 [2024-04-15 02:04:55.224637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.869 [2024-04-15 02:04:55.224845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.869 [2024-04-15 02:04:55.224881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.869 qpair failed and we were unable to recover it. 00:30:09.869 [2024-04-15 02:04:55.225117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.869 [2024-04-15 02:04:55.225326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.869 [2024-04-15 02:04:55.225351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.869 qpair failed and we were unable to recover it. 00:30:09.869 [2024-04-15 02:04:55.225574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.869 [2024-04-15 02:04:55.225817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.225841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.226097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.226320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.226356] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.226632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.226899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.226929] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 
00:30:09.870 [2024-04-15 02:04:55.227139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.227386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.227411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.227631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.227829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.227854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.228080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.228323] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.228360] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.228604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.228847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.228883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.229145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.229357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.229384] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.229608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.229839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.229873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.230096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.230341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.230375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 
00:30:09.870 [2024-04-15 02:04:55.230617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.230831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.230868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.231123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.231341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.231376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.231621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.231829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.231854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.232090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.232301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.232337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.232552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.232771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.232807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.233059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.233335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.233362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.233610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.233802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.233827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 
00:30:09.870 [2024-04-15 02:04:55.234017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.234232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.234257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.234496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.234698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.234733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.234955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.235168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.235204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.235441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.235683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.235711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.235898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.236144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.236170] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.236388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.236589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.236614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.236866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.237087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.237123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 
00:30:09.870 [2024-04-15 02:04:55.237352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.237595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.237624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.237829] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.238032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.238063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.238266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.238464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.238488] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.238689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.238901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.238926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.239166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.239366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.239393] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.870 qpair failed and we were unable to recover it. 00:30:09.870 [2024-04-15 02:04:55.239594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.870 [2024-04-15 02:04:55.239818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.239843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.240055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.240290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.240315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 
00:30:09.871 [2024-04-15 02:04:55.240538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.240735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.240760] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.240981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.241214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.241240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.241438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.241682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.241708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.241931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.242177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.242203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.242422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.242661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.242686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.242935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.243159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.243186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.243382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.243606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.243631] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 
00:30:09.871 [2024-04-15 02:04:55.243850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.244052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.244079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.244327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.244529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.244553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.244767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.244964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.244988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.245212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.245408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.245433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.245682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.245919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.245944] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.246169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.246406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.246430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.246627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.246847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.246873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 
00:30:09.871 [2024-04-15 02:04:55.247094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.247318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.247343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.247589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.247810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.247834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.248036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.248246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.248272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.248498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.248697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.248721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.248963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.249181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.249208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.249405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.249631] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.249656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.249902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.250118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.250143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 
00:30:09.871 [2024-04-15 02:04:55.250338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.250536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.250568] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.250761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.250987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.251012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.251223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.251448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.251472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.251667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.251922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.251946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.252145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.252381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.252406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.252633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.252897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.871 [2024-04-15 02:04:55.252922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.871 qpair failed and we were unable to recover it. 00:30:09.871 [2024-04-15 02:04:55.253124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.872 [2024-04-15 02:04:55.253326] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.872 [2024-04-15 02:04:55.253353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.872 qpair failed and we were unable to recover it. 
00:30:09.872 [2024-04-15 02:04:55.253579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.872 [2024-04-15 02:04:55.253802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.872 [2024-04-15 02:04:55.253827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.872 qpair failed and we were unable to recover it. 00:30:09.872 [2024-04-15 02:04:55.254055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.872 [2024-04-15 02:04:55.254253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.872 [2024-04-15 02:04:55.254279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.872 qpair failed and we were unable to recover it. 00:30:09.872 [2024-04-15 02:04:55.254482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.872 [2024-04-15 02:04:55.254677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.872 [2024-04-15 02:04:55.254702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.872 qpair failed and we were unable to recover it. 00:30:09.872 [2024-04-15 02:04:55.254894] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.872 [2024-04-15 02:04:55.255087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.872 [2024-04-15 02:04:55.255117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.872 qpair failed and we were unable to recover it. 00:30:09.872 [2024-04-15 02:04:55.255313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.872 [2024-04-15 02:04:55.255540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.872 [2024-04-15 02:04:55.255565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.872 qpair failed and we were unable to recover it. 00:30:09.872 [2024-04-15 02:04:55.255808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.872 [2024-04-15 02:04:55.256001] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.872 [2024-04-15 02:04:55.256026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:09.872 qpair failed and we were unable to recover it. 
00:30:09.872 Write completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Write completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Read completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Write completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Read completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Read completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Write completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Read completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Write completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Read completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Write completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Read completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Write completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Write completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Read completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Read completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Write completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Read completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Read completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Write completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Write completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Read completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Write completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Write completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Read completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Write completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Read completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Write completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Write completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Read completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Read completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 Read completed with error (sct=0, sc=8)
00:30:09.872 starting I/O failed
00:30:09.872 [2024-04-15 02:04:55.256394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:09.872 [2024-04-15 02:04:55.256616] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.256833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.256861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.872 qpair failed and we were unable to recover it.
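[Note on the aborted I/O burst above: in an NVMe completion, sct=0 selects the Generic Command Status type, and within that type sc=0x8 reads, to the best of my understanding of the spec (treat as an assumption), as "Command Aborted due to SQ Deletion": the in-flight reads and writes were failed back when the broken qpair's submission queue was torn down. The -6 from spdk_nvme_qpair_process_completions() is a negated errno, as the log's own "(No such device or address)" suffix already indicates. A small self-contained sketch of both decodings:]

/* Sketch: decoding the (sct, sc) pairs and the -6 return code seen above.
 * Assumes the standard NVMe meaning of sct=0/sc=0x8; not SPDK source. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <stdint.h>

static const char *decode_status(uint8_t sct, uint8_t sc)
{
    /* sct=0 is the Generic Command Status type; there, 0x8 means the
     * command was aborted because its submission queue was deleted. */
    if (sct == 0 && sc == 0x8)
        return "Command Aborted due to SQ Deletion";
    return "other status";
}

int main(void)
{
    /* Every completion in the burst above carries sct=0, sc=8. */
    printf("sct=0, sc=8 -> %s\n", decode_status(0, 8));

    /* The CQ transport error is a negated errno: -6 == -ENXIO.
     * On Linux this prints: CQ transport error -6 (No such device or address) */
    int rc = -6;
    printf("CQ transport error %d (%s)\n", rc, strerror(-rc));
    return 0;
}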
00:30:09.872 [2024-04-15 02:04:55.257068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.257276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.257313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.872 qpair failed and we were unable to recover it.
00:30:09.872 [2024-04-15 02:04:55.257535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.257750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.257792] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.872 qpair failed and we were unable to recover it.
00:30:09.872 [2024-04-15 02:04:55.258042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.258253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.258279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.872 qpair failed and we were unable to recover it.
00:30:09.872 [2024-04-15 02:04:55.258524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.258724] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.258750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.872 qpair failed and we were unable to recover it.
00:30:09.872 [2024-04-15 02:04:55.258942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.259169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.259205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.872 qpair failed and we were unable to recover it.
00:30:09.872 [2024-04-15 02:04:55.259423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.259669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.259705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.872 qpair failed and we were unable to recover it.
00:30:09.872 [2024-04-15 02:04:55.259975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.260222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.260250] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.872 qpair failed and we were unable to recover it.
00:30:09.872 [2024-04-15 02:04:55.260452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.260694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.260719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.872 qpair failed and we were unable to recover it.
00:30:09.872 [2024-04-15 02:04:55.260934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.261161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.261186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.872 qpair failed and we were unable to recover it.
00:30:09.872 [2024-04-15 02:04:55.261412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.261676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.261711] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.872 qpair failed and we were unable to recover it.
00:30:09.872 [2024-04-15 02:04:55.261948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.262153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.262179] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.872 qpair failed and we were unable to recover it.
00:30:09.872 [2024-04-15 02:04:55.262399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.262639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.262676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.872 qpair failed and we were unable to recover it.
00:30:09.872 [2024-04-15 02:04:55.262899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.263121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.872 [2024-04-15 02:04:55.263158] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.872 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.263415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.263629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.263665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.263928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.264177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.264205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.264475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.264668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.264694] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.264909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.265129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.265167] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.265388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.265632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.265665] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.265916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.266121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.266149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.266399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.266610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.266645] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.266867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.267115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.267151] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.267400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.267673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.267701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.267896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.268117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.268142] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.268336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.268551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.268587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.268831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.269057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.269093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.269337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.269560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.269584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.269808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.270069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.270106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.270327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.270552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.270587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.270866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.271094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.271120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.271319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.271540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.271564] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.271850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.272060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.272095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.272346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.272615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.272646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.272860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.273081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.273106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.273306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.273550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.273585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.273809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.274040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.274212] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.274442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.274665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.873 [2024-04-15 02:04:55.274692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.873 qpair failed and we were unable to recover it.
00:30:09.873 [2024-04-15 02:04:55.274900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.275115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.275152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.275369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.275587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.275621] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.275838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.276062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.276089] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.276284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.276477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.276501] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.276734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.276955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.276979] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.277171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.277417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.277442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.277666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.277928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.277952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.278151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.278353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.278380] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.278576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.278803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.278828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.279043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.279253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.279277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.279505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.279721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.279745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.279942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.280146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.280173] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.280400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.280597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.280624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.280852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.281117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.281143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.281392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.281605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.281630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.281855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.282104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.282129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.282351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.282567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.282597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.282837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.283058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.283086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.283310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.283557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.283583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.283805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.283999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.874 [2024-04-15 02:04:55.284024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.874 qpair failed and we were unable to recover it.
00:30:09.874 [2024-04-15 02:04:55.284221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.284428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.284452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.874 qpair failed and we were unable to recover it. 00:30:09.874 [2024-04-15 02:04:55.284672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.284921] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.284946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.874 qpair failed and we were unable to recover it. 00:30:09.874 [2024-04-15 02:04:55.285192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.285420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.285445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.874 qpair failed and we were unable to recover it. 00:30:09.874 [2024-04-15 02:04:55.285664] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.285885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.285910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.874 qpair failed and we were unable to recover it. 00:30:09.874 [2024-04-15 02:04:55.286152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.286350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.286375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.874 qpair failed and we were unable to recover it. 00:30:09.874 [2024-04-15 02:04:55.286622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.286808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.286833] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.874 qpair failed and we were unable to recover it. 00:30:09.874 [2024-04-15 02:04:55.287056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.287255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.287287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.874 qpair failed and we were unable to recover it. 
00:30:09.874 [2024-04-15 02:04:55.287515] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.287737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.287761] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.874 qpair failed and we were unable to recover it. 00:30:09.874 [2024-04-15 02:04:55.287981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.288196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.288221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.874 qpair failed and we were unable to recover it. 00:30:09.874 [2024-04-15 02:04:55.288417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.874 [2024-04-15 02:04:55.288634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.288659] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.288878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.289079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.289105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.289352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.289573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.289599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.289842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.290030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.290062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.290282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.290501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.290526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 
00:30:09.875 [2024-04-15 02:04:55.290787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.290979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.291003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.291213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.291408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.291432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.291651] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.291879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.291903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.292105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.292304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.292329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.292573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.292794] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.292818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.293064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.293264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.293290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.293478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.293673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.293698] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 
00:30:09.875 [2024-04-15 02:04:55.293888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.294141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.294166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.294412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.294608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.294633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.294819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.295066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.295091] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.295308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.295521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.295546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.295764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.295990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.296015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.296235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.296433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.296457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.296683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.296896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.296920] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 
00:30:09.875 [2024-04-15 02:04:55.297114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.297330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.297355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.297546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.297742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.297766] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.297984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.298217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.298242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.298432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.298614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.298638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.298861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.299072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.299098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.299351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.299573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.299598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.299792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.300033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.300063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 
00:30:09.875 [2024-04-15 02:04:55.300286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.300503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.300527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.300780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.300969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.300995] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.301223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.301475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.301500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.875 qpair failed and we were unable to recover it. 00:30:09.875 [2024-04-15 02:04:55.301746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.301944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.875 [2024-04-15 02:04:55.301968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.302188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.302383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.302407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.302622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.302842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.302866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.303083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.303298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.303323] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 
00:30:09.876 [2024-04-15 02:04:55.303533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.303758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.303782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.304009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.304217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.304243] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.304464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.304718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.304744] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.304961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.305205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.305229] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.305463] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.305677] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.305702] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.305941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.306174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.306200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.306389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.306614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.306641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 
00:30:09.876 [2024-04-15 02:04:55.306891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.307117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.307143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.307370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.307619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.307643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.307870] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.308081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.308105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.308334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.308528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.308554] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.308797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.309043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.309074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.309335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.309523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.309548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.309841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.310095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.310122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 
00:30:09.876 [2024-04-15 02:04:55.310322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.310524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.310549] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.310766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.310968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.310997] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.311255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.311453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.311478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.311696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.311890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.311915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.312117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.312319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.312343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.312569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.312783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.312807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.313021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.313229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.313254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 
00:30:09.876 [2024-04-15 02:04:55.313477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.313690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.313715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.313905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.314150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.314176] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.314401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.314621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.314646] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.314844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.315094] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.315119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.876 qpair failed and we were unable to recover it. 00:30:09.876 [2024-04-15 02:04:55.315342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.315554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.876 [2024-04-15 02:04:55.315578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.315775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.316085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.316110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.316303] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.316508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.316534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 
00:30:09.877 [2024-04-15 02:04:55.316771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.316989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.317014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.317264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.317514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.317538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.317815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.318011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.318035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.318265] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.318524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.318548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.318767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.318995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.319018] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.319269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.319489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.319514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.319738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.320013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.320038] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 
00:30:09.877 [2024-04-15 02:04:55.320309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.320557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.320581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.320807] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.321006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.321030] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.321234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.321459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.321484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.321700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.321923] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.321948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.322153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.322375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.322399] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.322642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.322847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.322872] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.323132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.323392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.323417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 
00:30:09.877 [2024-04-15 02:04:55.323642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.323863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.323887] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.324148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.324398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.324423] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.324613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.324835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.324860] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.325106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.325318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.325343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.325572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.325789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.325814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.326016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.326221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.326246] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.326439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.326656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.326681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 
00:30:09.877 [2024-04-15 02:04:55.326876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.327127] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.327152] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.327456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.327699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.327723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.327922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.328163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.328188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.328388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.328628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.328668] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.328932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.329157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.877 [2024-04-15 02:04:55.329182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.877 qpair failed and we were unable to recover it. 00:30:09.877 [2024-04-15 02:04:55.329407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.878 [2024-04-15 02:04:55.329646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.878 [2024-04-15 02:04:55.329670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.878 qpair failed and we were unable to recover it. 00:30:09.878 [2024-04-15 02:04:55.329895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.878 [2024-04-15 02:04:55.330140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.878 [2024-04-15 02:04:55.330165] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.878 qpair failed and we were unable to recover it. 
00:30:09.878 [2024-04-15 02:04:55.330385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.878 [2024-04-15 02:04:55.330658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.878 [2024-04-15 02:04:55.330703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.878 qpair failed and we were unable to recover it. 00:30:09.878 [2024-04-15 02:04:55.330973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.878 [2024-04-15 02:04:55.331179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.878 [2024-04-15 02:04:55.331207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.878 qpair failed and we were unable to recover it. 00:30:09.878 [2024-04-15 02:04:55.331422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.878 [2024-04-15 02:04:55.331646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.878 [2024-04-15 02:04:55.331671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.878 qpair failed and we were unable to recover it. 00:30:09.878 [2024-04-15 02:04:55.331899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.878 [2024-04-15 02:04:55.332156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.878 [2024-04-15 02:04:55.332185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.878 qpair failed and we were unable to recover it. 00:30:09.878 [2024-04-15 02:04:55.332408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.878 [2024-04-15 02:04:55.332688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.878 [2024-04-15 02:04:55.332737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.878 qpair failed and we were unable to recover it. 00:30:09.878 [2024-04-15 02:04:55.333005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.878 [2024-04-15 02:04:55.333268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.878 [2024-04-15 02:04:55.333295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.878 qpair failed and we were unable to recover it. 00:30:09.878 [2024-04-15 02:04:55.333519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.878 [2024-04-15 02:04:55.333737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.878 [2024-04-15 02:04:55.333764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.878 qpair failed and we were unable to recover it. 
[... the four-line failure sequence above repeats, with only the microsecond timestamps advancing (02:04:55.334 through 02:04:55.417), for every further reconnection attempt to tqpair=0x15e1610 at 10.0.0.2:4420; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:30:09.883 [2024-04-15 02:04:55.417495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.417851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.417913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.883 qpair failed and we were unable to recover it. 00:30:09.883 [2024-04-15 02:04:55.418187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.418442] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.418469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.883 qpair failed and we were unable to recover it. 00:30:09.883 [2024-04-15 02:04:55.418742] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.419129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.419157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.883 qpair failed and we were unable to recover it. 00:30:09.883 [2024-04-15 02:04:55.419533] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.420025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.420085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.883 qpair failed and we were unable to recover it. 00:30:09.883 [2024-04-15 02:04:55.420365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.420589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.420614] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.883 qpair failed and we were unable to recover it. 00:30:09.883 [2024-04-15 02:04:55.420857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.421111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.421137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.883 qpair failed and we were unable to recover it. 00:30:09.883 [2024-04-15 02:04:55.421397] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.421628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.421655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.883 qpair failed and we were unable to recover it. 
00:30:09.883 [2024-04-15 02:04:55.421939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.422177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.422205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.883 qpair failed and we were unable to recover it. 00:30:09.883 [2024-04-15 02:04:55.422447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.422904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.422955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.883 qpair failed and we were unable to recover it. 00:30:09.883 [2024-04-15 02:04:55.423241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.883 [2024-04-15 02:04:55.423617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.423675] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.423913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.424187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.424213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.424467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.424862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.424910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.425162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.425405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.425449] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.425732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.426101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.426129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 
00:30:09.884 [2024-04-15 02:04:55.426392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.426617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.426647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.427106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.427365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.427392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.427619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.427866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.427894] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.428119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.428325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.428349] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.428628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.428999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.429075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.429324] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.429573] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.429625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.429898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.430219] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.430247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 
00:30:09.884 [2024-04-15 02:04:55.430489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.430843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.430899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.431155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.431441] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.431466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.431699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.432058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.432087] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.432341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.432586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.432629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.432861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.433112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.433139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.433358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.433629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.433656] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.433964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.434214] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.434241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 
00:30:09.884 [2024-04-15 02:04:55.434490] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.434970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.435017] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.435281] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.435555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.435582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.435852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.436100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.436128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.436373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.436840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.436891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.437171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.437417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.437445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.437715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.438011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.438036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.438320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.438529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.438556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 
00:30:09.884 [2024-04-15 02:04:55.438797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.439038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.439082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.439351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.439800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.439846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.440112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.440408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.884 [2024-04-15 02:04:55.440436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.884 qpair failed and we were unable to recover it. 00:30:09.884 [2024-04-15 02:04:55.440687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.440955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.440980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.885 qpair failed and we were unable to recover it. 00:30:09.885 [2024-04-15 02:04:55.441228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.441669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.441720] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.885 qpair failed and we were unable to recover it. 00:30:09.885 [2024-04-15 02:04:55.441982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.442204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.442231] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.885 qpair failed and we were unable to recover it. 00:30:09.885 [2024-04-15 02:04:55.442478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.442904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.442953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.885 qpair failed and we were unable to recover it. 
00:30:09.885 [2024-04-15 02:04:55.443208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.443452] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.443479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.885 qpair failed and we were unable to recover it. 00:30:09.885 [2024-04-15 02:04:55.443849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.444155] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.444183] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.885 qpair failed and we were unable to recover it. 00:30:09.885 [2024-04-15 02:04:55.444423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.444928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.444978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.885 qpair failed and we were unable to recover it. 00:30:09.885 [2024-04-15 02:04:55.445209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.445536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.445582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.885 qpair failed and we were unable to recover it. 00:30:09.885 [2024-04-15 02:04:55.445828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.446068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.446096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.885 qpair failed and we were unable to recover it. 00:30:09.885 [2024-04-15 02:04:55.446343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.446727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.446782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.885 qpair failed and we were unable to recover it. 00:30:09.885 [2024-04-15 02:04:55.447025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.447301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.447329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.885 qpair failed and we were unable to recover it. 
00:30:09.885 [2024-04-15 02:04:55.447559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.447967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.448015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.885 qpair failed and we were unable to recover it. 00:30:09.885 [2024-04-15 02:04:55.448309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.448668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.448692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.885 qpair failed and we were unable to recover it. 00:30:09.885 [2024-04-15 02:04:55.449104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.449355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.449382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.885 qpair failed and we were unable to recover it. 00:30:09.885 [2024-04-15 02:04:55.449629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.450072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.450128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.885 qpair failed and we were unable to recover it. 00:30:09.885 [2024-04-15 02:04:55.450352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.450634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.450661] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.885 qpair failed and we were unable to recover it. 00:30:09.885 [2024-04-15 02:04:55.450905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.451242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.451270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.885 qpair failed and we were unable to recover it. 00:30:09.885 [2024-04-15 02:04:55.451696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.452012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.885 [2024-04-15 02:04:55.452039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.885 qpair failed and we were unable to recover it. 
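On Linux, errno = 111 is ECONNREFUSED: posix_sock_create()'s connect() to 10.0.0.2:4420 is actively refused because nothing is listening there, so each NVMe/TCP qpair reconnect attempt fails immediately. A minimal bash sketch of the same refusal, using bash's /dev/tcp redirection (illustrative only, not part of the test suite; it assumes nothing is listening on local port 4420):

# Sketch: provoke the same ECONNREFUSED (errno 111) the log shows.
# Assumption: no listener on 127.0.0.1:4420 when this runs.
if timeout 1 bash -c 'exec 3<>/dev/tcp/127.0.0.1/4420' 2>/dev/null; then
    echo "unexpected: a listener answered on port 4420"
else
    echo "connect() refused or timed out, matching the errno = 111 entries above"
fi

Point the redirection at a port with a live listener instead and the first branch is taken, which is exactly the transition the host driver is waiting for here.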
00:30:09.885 [2024-04-15 02:04:55.455748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 2288005 Killed "${NVMF_APP[@]}" "$@"
00:30:09.885 [2024-04-15 02:04:55.456180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-15 02:04:55.456217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
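The shell message above is the cause of the refusal flood: line 44 of target_disconnect.sh killed the running nvmf target application (PID 2288005), so the host has been retrying against a dead listener. A hedged sketch of that kill-then-restart pattern (the pgrep/sleep lines and the relative build path are illustrative assumptions; only the -i/-e/-m flags are taken from the trace below):

# Sketch of the disconnect the test provokes (illustrative, not the actual
# target_disconnect.sh logic). Assumes a single running nvmf_tgt and the
# standard SPDK build layout.
target_pid=$(pgrep -f nvmf_tgt | head -n1)
[[ -n $target_pid ]] && kill -9 "$target_pid"   # host now logs connect() failed, errno = 111
sleep 2                                         # refusal window, as in the cycles above
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &   # restart with the flags the trace shows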
00:30:09.885 02:04:55 -- host/target_disconnect.sh@56 -- # disconnect_init 10.0.0.2
[2024-04-15 02:04:55.456474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.885 02:04:55 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:09.885 02:04:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
[2024-04-15 02:04:55.456881] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-15 02:04:55.456938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:30:09.885 02:04:55 -- common/autotest_common.sh@712 -- # xtrace_disable
[2024-04-15 02:04:55.457189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.885 02:04:55 -- common/autotest_common.sh@10 -- # set +x
[... five more connect() failed, errno = 111 / qpair failed cycles (02:04:55.457-02:04:55.459) ...]
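disconnect_init then calls nvmfappstart -m 0xF0, restarting the target with CPU core mask 0xF0, i.e. bits 4-7 set, which pins the app's reactors to cores 4-7. A small sketch of how such a hex mask decodes to core numbers (illustrative arithmetic, not an SPDK helper):

# Sketch: which cores a -m hex mask selects (here 0xF0 -> cores 4-7).
mask=0xF0
for ((core = 0; core < 12; core++)); do
    (( (mask >> core) & 1 )) && echo "core $core selected"
done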
[... two connect() failed, errno = 111 / qpair failed cycles (02:04:55.460-02:04:55.461) ...]
00:30:09.886 02:04:55 -- nvmf/common.sh@469 -- # nvmfpid=2288701
00:30:09.886 02:04:55 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
[2024-04-15 02:04:55.461468] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.886 02:04:55 -- nvmf/common.sh@470 -- # waitforlisten 2288701
00:30:09.886 02:04:55 -- common/autotest_common.sh@819 -- # '[' -z 2288701 ']'
[2024-04-15 02:04:55.461774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
[2024-04-15 02:04:55.461824] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:30:09.886 02:04:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:09.886 02:04:55 -- common/autotest_common.sh@824 -- # local max_retries=100
00:30:09.886 02:04:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:09.886 02:04:55 -- common/autotest_common.sh@828 -- # xtrace_disable
00:30:09.886 02:04:55 -- common/autotest_common.sh@10 -- # set +x
[... interleaved connect() failed, errno = 111 cycles (02:04:55.462-02:04:55.463) ...]
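The trace above shows the restart bookkeeping: the new PID is recorded (nvmfpid=2288701), nvmf_tgt is launched inside the cvl_0_0_ns_spdk network namespace, and waitforlisten polls, up to max_retries=100 times, until the app is up and listening on the RPC socket /var/tmp/spdk.sock. A minimal sketch of that polling idea (illustrative, not the actual autotest_common.sh implementation):

# Sketch of a waitforlisten-style poll: succeed once the PID is alive
# and its RPC UNIX socket exists, fail after max_retries attempts.
waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
    local i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1    # process exited
        [[ -S $rpc_addr ]] && return 0            # RPC socket is up
        sleep 0.1
    done
    return 1                                      # timed out
}
# usage: waitforlisten_sketch 2288701 /var/tmp/spdk.sock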
00:30:09.886 [2024-04-15 02:04:55.463267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.886 [2024-04-15 02:04:55.463491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.886 [2024-04-15 02:04:55.463517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.886 qpair failed and we were unable to recover it.
[... the same error cycle repeats while the new target starts, from 02:04:55.463 through 02:04:55.487 ...]
00:30:09.888 [2024-04-15 02:04:55.487596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.487818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.487855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.488091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.488291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.488315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.488562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.488822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.488846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.489095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.489298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.489334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.489553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.489797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.489834] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.490085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.490299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.490337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.490613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.490820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.490856] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 
00:30:09.888 [2024-04-15 02:04:55.491123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.491333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.491361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.491563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.491816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.491841] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.492066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.492306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.492342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.492562] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.492855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.492884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.493083] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.493279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.493303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.493558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.493752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.493778] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.494016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.494248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.494284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 
00:30:09.888 [2024-04-15 02:04:55.494538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.494749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.494785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.494999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.495256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.495289] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.495520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.495719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.495743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.495978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.496185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.496222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.496473] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.496703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.496732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.496952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.497216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.497242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.497476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.497715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.497750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 
00:30:09.888 [2024-04-15 02:04:55.497999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.498238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.498276] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.498552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.498758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.888 [2024-04-15 02:04:55.498784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.888 qpair failed and we were unable to recover it. 00:30:09.888 [2024-04-15 02:04:55.499013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.889 [2024-04-15 02:04:55.499266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.889 [2024-04-15 02:04:55.499300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.889 qpair failed and we were unable to recover it. 00:30:09.889 [2024-04-15 02:04:55.499579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.889 [2024-04-15 02:04:55.499818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.889 [2024-04-15 02:04:55.499855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.889 qpair failed and we were unable to recover it. 00:30:09.889 [2024-04-15 02:04:55.500113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.889 [2024-04-15 02:04:55.500352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.889 [2024-04-15 02:04:55.500378] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.889 qpair failed and we were unable to recover it. 00:30:09.889 [2024-04-15 02:04:55.500607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.889 [2024-04-15 02:04:55.500806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.889 [2024-04-15 02:04:55.500832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.889 qpair failed and we were unable to recover it. 00:30:09.889 [2024-04-15 02:04:55.501029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.889 [2024-04-15 02:04:55.501236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:09.889 [2024-04-15 02:04:55.501262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:09.889 qpair failed and we were unable to recover it. 
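On Linux, errno = 111 is ECONNREFUSED: the initiator's connect() reaches 10.0.0.2, but nothing is listening on the NVMe/TCP port 4420 yet, so the kernel answers with a reset and SPDK's posix_sock_create() logs the failure. Below is a minimal standalone sketch of the same syscall path; the address and port mirror the log, and whether you actually observe ECONNREFUSED (rather than, say, ETIMEDOUT or EHOSTUNREACH) depends on the target host being reachable with no listener bound.

```c
/* Minimal sketch reproducing the failure seen above: a TCP connect()
 * to a reachable host with no listener on the port fails with
 * errno = 111 (ECONNREFUSED).  10.0.0.2:4420 mirrors the log; adjust
 * the address for your environment. */
#include <stdio.h>
#include <string.h>
#include <errno.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);          /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}
```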
[... retries continue through 02:04:55.504 ...]
00:30:09.889 [2024-04-15 02:04:55.504164] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization...
00:30:09.889 [2024-04-15 02:04:55.504224] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:09.889 [2024-04-15 02:04:55.504449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.889 [2024-04-15 02:04:55.504710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:09.889 [2024-04-15 02:04:55.504759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:09.889 qpair failed and we were unable to recover it.
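The two interjected lines above are the nvmf target process starting up while the initiator keeps retrying: SPDK hands the bracketed EAL parameter list to DPDK's environment abstraction layer. As a hedged illustration of what consumes such an argument vector, a bare DPDK program passes it to rte_eal_init(), a real DPDK entry point; building this sketch requires the DPDK headers and libraries, and the example arguments are simply the ones from the log.

```c
/* Hedged sketch of DPDK EAL initialization with parameters like the
 * ones logged above, e.g.:
 *   ./app -c 0xF0 --no-telemetry --file-prefix=spdk0 --proc-type=auto
 * rte_eal_init() returns the number of parsed arguments on success. */
#include <stdio.h>
#include <rte_eal.h>
#include <rte_errno.h>

int main(int argc, char **argv)
{
    int ret = rte_eal_init(argc, argv);
    if (ret < 0) {
        fprintf(stderr, "rte_eal_init() failed: %s\n",
                rte_strerror(rte_errno));
        return 1;
    }
    printf("EAL initialized, %d args consumed\n", ret);

    rte_eal_cleanup();  /* release hugepages and other EAL resources */
    return 0;
}
```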
[... the connect()/qpair failure pattern continues uninterrupted (errno = 111 on every attempt, tqpair=0x15e1610, addr=10.0.0.2, port=4420); the wall-clock prefix advances from 00:30:09.889 to 00:30:10.160 and timestamps run from 02:04:55.505 through 02:04:55.543 ...]
00:30:10.163 [retry cycles continue from 02:04:55.543706 through 02:04:55.545842]
00:30:10.163 EAL: No free 2048 kB hugepages reported on node 1
00:30:10.163 [2024-04-15 02:04:55.546064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.163 [2024-04-15 02:04:55.546094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.163 qpair failed and we were unable to recover it.
00:30:10.163 [retry cycles continue from 02:04:55.546293 through 02:04:55.547069]
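The interleaved EAL line comes from DPDK's Environment Abstraction Layer during startup: NUMA node 1 has no free 2048 kB hugepages, so the application falls back to whatever memory node 0 provides. A hedged sketch of reading the counter that notice is about is below; the per-node sysfs path is the standard Linux layout, and "node1" is taken from the log line.

/*
 * Sketch only: read the free-hugepage counter behind the EAL notice
 * "No free 2048 kB hugepages reported on node 1". Assumes the standard
 * Linux per-NUMA-node sysfs layout.
 */
#include <stdio.h>

int main(void)
{
    const char *path =
        "/sys/devices/system/node/node1/hugepages/hugepages-2048kB/free_hugepages";
    FILE *f = fopen(path, "r");
    long free_pages = -1;

    if (f == NULL) {
        perror("fopen");        /* node or hugepage size not present */
        return 1;
    }
    if (fscanf(f, "%ld", &free_pages) == 1)
        printf("node 1 free 2048 kB hugepages: %ld\n", free_pages);
    fclose(f);
    return 0;
}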
00:30:10.163 [the connect() failed (errno = 111) / sock connection error / "qpair failed" cycle repeats unchanged from 02:04:55.547293 through 02:04:55.577674]
00:30:10.166 [retry cycles continue from 02:04:55.577874 through 02:04:55.579775]
00:30:10.166 [2024-04-15 02:04:55.579863] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:10.166 [2024-04-15 02:04:55.579996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.166 [2024-04-15 02:04:55.580021] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.166 qpair failed and we were unable to recover it.
00:30:10.166 [retry cycles continue from 02:04:55.580246 through 02:04:55.580992]
00:30:10.166 [the connect() failed (errno = 111) / sock connection error / "qpair failed" cycle repeats unchanged from 02:04:55.581186 through 02:04:55.614304]
00:30:10.168 [2024-04-15 02:04:55.614527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.614755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.614782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.168 qpair failed and we were unable to recover it. 00:30:10.168 [2024-04-15 02:04:55.615028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.615254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.615281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.168 qpair failed and we were unable to recover it. 00:30:10.168 [2024-04-15 02:04:55.615509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.615729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.615756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.168 qpair failed and we were unable to recover it. 00:30:10.168 [2024-04-15 02:04:55.616006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.616227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.616254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.168 qpair failed and we were unable to recover it. 00:30:10.168 [2024-04-15 02:04:55.616487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.616723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.616750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.168 qpair failed and we were unable to recover it. 00:30:10.168 [2024-04-15 02:04:55.616975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.617203] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.617230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.168 qpair failed and we were unable to recover it. 00:30:10.168 [2024-04-15 02:04:55.617478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.617683] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.617710] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.168 qpair failed and we were unable to recover it. 
00:30:10.168 [2024-04-15 02:04:55.617948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.618211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.618238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.168 qpair failed and we were unable to recover it. 00:30:10.168 [2024-04-15 02:04:55.618464] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.618688] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.618715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.168 qpair failed and we were unable to recover it. 00:30:10.168 [2024-04-15 02:04:55.618994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.619227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.619256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.168 qpair failed and we were unable to recover it. 00:30:10.168 [2024-04-15 02:04:55.619510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.168 [2024-04-15 02:04:55.619712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.619738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.619947] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.620177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.620205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.620456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.620649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.620676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.620929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.621180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.621207] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 
00:30:10.169 [2024-04-15 02:04:55.621429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.621673] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.621700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.621937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.622190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.622217] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.622474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.622695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.622721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.622979] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.623204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.623230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.623432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.623654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.623681] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.623897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.624101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.624129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.624325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.624550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.624577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 
00:30:10.169 [2024-04-15 02:04:55.624772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.624989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.625014] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.625258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.625480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.625506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.625729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.625952] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.625978] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.626176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.626534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.626560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.626781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.626980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.627005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.627255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.627507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.627533] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.627779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.628007] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.628032] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 
00:30:10.169 [2024-04-15 02:04:55.628274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.628502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.628528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.628752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.628946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.628974] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.629205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.629541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.629567] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.629815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.630067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.630093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.630327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.630577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.630603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.630828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.631056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.631082] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.631287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.631491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.631517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 
00:30:10.169 [2024-04-15 02:04:55.631743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.631936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.631961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.632184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.632399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.632425] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.169 [2024-04-15 02:04:55.632657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.632893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.169 [2024-04-15 02:04:55.632924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.169 qpair failed and we were unable to recover it. 00:30:10.170 [2024-04-15 02:04:55.633181] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.633406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.633432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 00:30:10.170 [2024-04-15 02:04:55.633679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.633897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.633923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 00:30:10.170 [2024-04-15 02:04:55.634136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.634387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.634413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 00:30:10.170 [2024-04-15 02:04:55.634636] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.634857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.634882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 
00:30:10.170 [2024-04-15 02:04:55.635105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.635332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.635372] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 00:30:10.170 [2024-04-15 02:04:55.635701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.635930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.635955] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 00:30:10.170 [2024-04-15 02:04:55.636170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.636402] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.636430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 00:30:10.170 [2024-04-15 02:04:55.636667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.636914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.636941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 00:30:10.170 [2024-04-15 02:04:55.637175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.637401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.637428] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 00:30:10.170 [2024-04-15 02:04:55.637687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.637904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.637934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 00:30:10.170 [2024-04-15 02:04:55.638187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.638388] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.638413] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 
00:30:10.170 [2024-04-15 02:04:55.638635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.638916] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.638941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 00:30:10.170 [2024-04-15 02:04:55.639239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.639431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.639458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 00:30:10.170 [2024-04-15 02:04:55.639670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.639927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.639953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 00:30:10.170 [2024-04-15 02:04:55.640185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.640447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.640473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 00:30:10.170 [2024-04-15 02:04:55.640666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.640878] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.640905] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 00:30:10.170 [2024-04-15 02:04:55.641116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.641314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.641340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 00:30:10.170 [2024-04-15 02:04:55.641617] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.641847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.641874] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 
00:30:10.170 [2024-04-15 02:04:55.642153] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.642438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.642465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 00:30:10.170 [2024-04-15 02:04:55.642669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.642898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.642927] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.170 qpair failed and we were unable to recover it. 00:30:10.170 [2024-04-15 02:04:55.643137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.643366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.170 [2024-04-15 02:04:55.643392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.643660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.643873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.643900] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.644118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.644315] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.644342] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.644545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.644796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.644823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.645027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.645257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.645284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 
00:30:10.171 [2024-04-15 02:04:55.645542] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.645804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.645831] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.646060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.646284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.646311] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.646530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.646751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.646777] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.646996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.647253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.647280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.647531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.647774] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.647801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.648031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.648240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.648267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.648491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.648719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.648745] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 
00:30:10.171 [2024-04-15 02:04:55.648941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.649162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.649189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.649408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.649674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.649701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.649901] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.650149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.650178] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.650426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.650618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.650644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.650889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.651110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.651139] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.651383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.651639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.651666] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.651885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.652072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.652098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 
00:30:10.171 [2024-04-15 02:04:55.652348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.652569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.652595] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.652815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.653042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.653077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.653295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.653520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.653546] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.653767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.653986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.654013] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.654298] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.654527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.654553] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.654839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.655107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.655134] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.655357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.655587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.655612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 
00:30:10.171 [2024-04-15 02:04:55.655849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.656089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.656116] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.656383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.656602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.656628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.171 qpair failed and we were unable to recover it. 00:30:10.171 [2024-04-15 02:04:55.656890] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.657115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.171 [2024-04-15 02:04:55.657143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.657395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.657608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.657635] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.657860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.658117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.658145] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.658339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.658642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.658683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.658925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.659169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.659197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 
00:30:10.172 [2024-04-15 02:04:55.659426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.659644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.659671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.659871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.660064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.660090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.660318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.660506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.660532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.660743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.661002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.661028] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.661230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.661430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.661455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.661657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.661893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.661921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.662178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.662379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.662406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 
00:30:10.172 [2024-04-15 02:04:55.662606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.662796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.662828] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.663063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.663267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.663296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.663520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.663728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.663754] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.663982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.664173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.664201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.664433] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.664661] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.664687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.664940] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.665164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.665192] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.665438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.665657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.665683] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 
00:30:10.172 [2024-04-15 02:04:55.665884] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.666185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.666210] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.666426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.666660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.666685] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.667000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.667233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.667261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.667507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.667758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.667784] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.668017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.668258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.668286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.668535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.668784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.668809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 00:30:10.172 [2024-04-15 02:04:55.669097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.669380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.172 [2024-04-15 02:04:55.669405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.172 qpair failed and we were unable to recover it. 
00:30:10.172 [2024-04-15 02:04:55.669654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.172 [2024-04-15 02:04:55.669883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.172 [2024-04-15 02:04:55.669909] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.172 qpair failed and we were unable to recover it.
00:30:10.172 [2024-04-15 02:04:55.670186] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.172 [2024-04-15 02:04:55.670422] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.172 [2024-04-15 02:04:55.670447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.172 qpair failed and we were unable to recover it.
00:30:10.172 [2024-04-15 02:04:55.670684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.172 [2024-04-15 02:04:55.670902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.172 [2024-04-15 02:04:55.670928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.671149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.671328] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:30:10.173 [2024-04-15 02:04:55.671395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.671419] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 [2024-04-15 02:04:55.671446] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:10.173 [2024-04-15 02:04:55.671464] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:10.173 [2024-04-15 02:04:55.671478] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:10.173 [2024-04-15 02:04:55.671534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:30:10.173 [2024-04-15 02:04:55.671612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.671563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:30:10.173 [2024-04-15 02:04:55.671616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:30:10.173 [2024-04-15 02:04:55.671619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:30:10.173 [2024-04-15 02:04:55.671875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.671902] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.672160] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.672380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.672405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.672605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.672854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.672880] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.673129] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.673487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.673512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.673779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.673976] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.674002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.674262] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.674454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.674482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.674729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.674951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.674976] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.675223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.675429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.675455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.675684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.675898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.675923] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.676117] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.676364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.676390] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.676642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.676827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.676853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.677087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.677289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.677315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.677537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.677759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.677785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.678012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.678236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.678263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.678455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.678675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.678701] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.678948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.679169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.679195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.679415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.679608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.679636] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.679841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.680066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.680092] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.680341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.680558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.680584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.680833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.681037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.681072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.681310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.681556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.681582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.681841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.682066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.682093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.682320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.682512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.682539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.682751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.682970] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.682996] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.173 [2024-04-15 02:04:55.683220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.683409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.173 [2024-04-15 02:04:55.683434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.173 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.683620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.683848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.683873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.684130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.684322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.684347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.684567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.684792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.684817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.685003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.685234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.685260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.685484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.685679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.685704] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.685891] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.686095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.686121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.686332] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.686558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.686584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.686818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.687044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.687075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.687274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.687498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.687526] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.687744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.687949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.687975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.688178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.688403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.688429] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.688644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.688857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.688883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.689078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.689269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.689295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.689489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.689712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.689738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.689985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.690234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.690261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.690459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.690679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.690705] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.690904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.691109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.691140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.691335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.691561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.691587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.691810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.692025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.692057] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.692248] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.692498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.692524] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.692725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.692977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.693002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.693195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.693424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.693450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.693648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.693865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.693891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.694115] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.694338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.694366] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.694621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.694866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.694891] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.174 [2024-04-15 02:04:55.695089] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.695317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.174 [2024-04-15 02:04:55.695343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.174 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.695538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.695730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.695756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.695957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.696189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.696216] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.696404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.696649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.696676] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.696880] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.697072] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.697099] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.697320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.697693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.697717] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.697936] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.698192] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.698218] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.698437] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.698654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.698680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.698897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.699121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.699147] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.699364] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.699582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.699607] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.699823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.700043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.700075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.700297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.700543] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.700569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.700768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.700963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.700989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.701220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.701446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.701472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.701670] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.701864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.701890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.702113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.702308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.702336] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.702559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.702813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.702839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.703039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.703244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.703270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.703519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.703761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.703786] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.704006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.704235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.704261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.704476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.704666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.704692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.704876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.705101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.705128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.705358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.705575] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.705600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.705987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.706195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.706222] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.706448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.706642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.706669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.175 qpair failed and we were unable to recover it.
00:30:10.175 [2024-04-15 02:04:55.706889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.175 [2024-04-15 02:04:55.707081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.707107] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.707310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.707507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.707532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.707755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.707946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.707972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.708195] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.708446] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.708472] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.708723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.708942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.708969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.709163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.709359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.709385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.709607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.709827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.709853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.710075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.710301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.710327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.710577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.710801] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.710827] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.711024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.711255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.711281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.711501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.711689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.711715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.711915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.712145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.712172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.712399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.712603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.712629] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.712855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.713079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.713106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.713353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.713549] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.713577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.713975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.714200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.714227] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.714426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.714650] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.714677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.714922] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.715274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.715304] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.715511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.715767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.715805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.716019] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.716240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.716279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.716538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.716748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.716788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.717060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.717305] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.717343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.717620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.717827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.717853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.718079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.718317] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.718355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.718608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.718827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.718864] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.719087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.719301] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.719328] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.719551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.719776] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.719801] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.176 [2024-04-15 02:04:55.719997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.720196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.176 [2024-04-15 02:04:55.720239] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.176 qpair failed and we were unable to recover it.
00:30:10.177 [2024-04-15 02:04:55.720462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.720735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.720772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.177 qpair failed and we were unable to recover it.
00:30:10.177 [2024-04-15 02:04:55.720994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.721358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.721396] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.177 qpair failed and we were unable to recover it.
00:30:10.177 [2024-04-15 02:04:55.721629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.721842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.721879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.177 qpair failed and we were unable to recover it.
00:30:10.177 [2024-04-15 02:04:55.722131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.722354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.722381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.177 qpair failed and we were unable to recover it.
00:30:10.177 [2024-04-15 02:04:55.722568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.722757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.722795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.177 qpair failed and we were unable to recover it.
00:30:10.177 [2024-04-15 02:04:55.723069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.723293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.723330] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.177 qpair failed and we were unable to recover it.
00:30:10.177 [2024-04-15 02:04:55.723576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.723842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.723869] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.177 qpair failed and we were unable to recover it.
00:30:10.177 [2024-04-15 02:04:55.724103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.724300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.724326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.177 qpair failed and we were unable to recover it.
00:30:10.177 [2024-04-15 02:04:55.724544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.724733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.724759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.177 qpair failed and we were unable to recover it.
00:30:10.177 [2024-04-15 02:04:55.724959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.725154] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.725191] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.177 qpair failed and we were unable to recover it.
00:30:10.177 [2024-04-15 02:04:55.725449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.725660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.725697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.177 qpair failed and we were unable to recover it.
00:30:10.177 [2024-04-15 02:04:55.725971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.726207] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.726235] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.177 qpair failed and we were unable to recover it.
00:30:10.177 [2024-04-15 02:04:55.726451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.726671] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.726697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.177 qpair failed and we were unable to recover it.
00:30:10.177 [2024-04-15 02:04:55.726928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.727146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.727185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.177 qpair failed and we were unable to recover it.
00:30:10.177 [2024-04-15 02:04:55.727400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.727642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.727679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.177 qpair failed and we were unable to recover it.
00:30:10.177 [2024-04-15 02:04:55.727931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.728132] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.728159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.177 qpair failed and we were unable to recover it.
00:30:10.177 [2024-04-15 02:04:55.728352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.728568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.177 [2024-04-15 02:04:55.728606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420
00:30:10.177 qpair failed and we were unable to recover it.
00:30:10.177 [2024-04-15 02:04:55.728823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.729065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.729103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.177 qpair failed and we were unable to recover it. 00:30:10.177 [2024-04-15 02:04:55.729321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.729529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.729555] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.177 qpair failed and we were unable to recover it. 00:30:10.177 [2024-04-15 02:04:55.729765] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.729967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.729993] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.177 qpair failed and we were unable to recover it. 00:30:10.177 [2024-04-15 02:04:55.730256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.730653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.730689] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.177 qpair failed and we were unable to recover it. 00:30:10.177 [2024-04-15 02:04:55.730934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.731379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.731420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.177 qpair failed and we were unable to recover it. 00:30:10.177 [2024-04-15 02:04:55.731635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.731877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.731915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.177 qpair failed and we were unable to recover it. 00:30:10.177 [2024-04-15 02:04:55.732168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.732445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.732483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.177 qpair failed and we were unable to recover it. 
00:30:10.177 [2024-04-15 02:04:55.732723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.732985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.733011] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.177 qpair failed and we were unable to recover it. 00:30:10.177 [2024-04-15 02:04:55.733226] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.733466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.733503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.177 qpair failed and we were unable to recover it. 00:30:10.177 [2024-04-15 02:04:55.733721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.733933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.733970] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.177 qpair failed and we were unable to recover it. 00:30:10.177 [2024-04-15 02:04:55.734223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.177 [2024-04-15 02:04:55.734420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.734447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.734653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.734853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.734879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.735099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.735357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.735394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.735646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.735872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.735910] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 
00:30:10.178 [2024-04-15 02:04:55.736177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.736390] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.736416] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.736608] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.736806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.736832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.737060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.737286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.737322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.737581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.737819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.737855] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.738082] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.738335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.738373] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.738597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.738841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.738879] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.739109] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.739338] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.739364] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 
00:30:10.178 [2024-04-15 02:04:55.739556] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.739779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.739817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.740038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.740277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.740315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.740536] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.740747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.740782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.741014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.741225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.741251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.741469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.741700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.741727] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.741930] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.742146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.742182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.742467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.742853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.742892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 
00:30:10.178 [2024-04-15 02:04:55.743119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.743351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.743379] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.743580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.743800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.743826] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.744020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.744249] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.744286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.744511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.744786] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.744822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.745057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.745255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.745281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.745514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.745758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.745807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.746030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.746296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.746340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 
00:30:10.178 [2024-04-15 02:04:55.746582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.746791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.746820] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.747042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.747261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.747297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.747568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.747848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.747885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.748138] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.748380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.748417] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.178 qpair failed and we were unable to recover it. 00:30:10.178 [2024-04-15 02:04:55.748642] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.178 [2024-04-15 02:04:55.748864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.748890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.749147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.749373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.749411] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.749663] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.749910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.749948] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 
00:30:10.179 [2024-04-15 02:04:55.750180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.750436] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.750475] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.750723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.750953] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.750981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.751217] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.751484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.751521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.751780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.752053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.752101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.752318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.752571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.752599] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.752791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.753018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.753067] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.753293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.753547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.753586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 
00:30:10.179 [2024-04-15 02:04:55.753856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.754093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.754121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.754334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.754551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.754578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.754799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.755053] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.755104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.755353] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.755599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.755638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.755860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.756137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.756174] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.756413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.756626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.756654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.756868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.757091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.757118] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 
00:30:10.179 [2024-04-15 02:04:55.757341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.757586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.757624] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.757843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.758110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.758140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.758365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.758586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.758613] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.758814] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.759063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.759102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.759349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.759560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.759600] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.759843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.760236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.760265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.179 qpair failed and we were unable to recover it. 00:30:10.179 [2024-04-15 02:04:55.760498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.179 [2024-04-15 02:04:55.760722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.760749] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 
00:30:10.180 [2024-04-15 02:04:55.761002] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.761231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.761257] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.761476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.761722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.761759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.761988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.762256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.762295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.762545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.762766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.762793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.762994] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.763188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.763214] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.763409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.763603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.763630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.763856] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.764100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.764143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 
00:30:10.180 [2024-04-15 02:04:55.764366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.764607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.764639] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.764840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.765064] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.765110] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.765320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.765550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.765577] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.765823] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.766023] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.766059] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.766271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.766484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.766522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15e1610 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.766782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.767028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.767066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.767273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.767477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.767504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 
00:30:10.180 [2024-04-15 02:04:55.767699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.767904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.767932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.768159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.768382] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.768408] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.768643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.768867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.768893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.769092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.769290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.769317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.769565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.769762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.769789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.769982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.770176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.770204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.770401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.770590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.770616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 
00:30:10.180 [2024-04-15 02:04:55.770809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.771011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.771044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.771254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.771483] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.771510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.771703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.771925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.771952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.772146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.772371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.772398] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.772620] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.772855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.772883] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.180 qpair failed and we were unable to recover it. 00:30:10.180 [2024-04-15 02:04:55.773081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.180 [2024-04-15 02:04:55.773290] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.773317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.773513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.773737] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.773763] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 
00:30:10.181 [2024-04-15 02:04:55.773984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.774190] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.774219] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.774419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.774610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.774637] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.774839] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.775068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.775096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.775292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.775496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.775527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.775773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.776020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.776053] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.776255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.776454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.776482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.776704] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.776929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.776958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 
00:30:10.181 [2024-04-15 02:04:55.777184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.777378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.777405] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.777633] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.777825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.777852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.778079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.778302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.778329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.778529] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.778746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.778772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.778966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.779161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.779190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.779381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.779602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.779628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.779817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.780018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.780054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 
00:30:10.181 [2024-04-15 02:04:55.780279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.780469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.780496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.780722] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.780959] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.780987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.781193] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.781419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.781448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.781646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.781892] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.781919] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.782141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.782335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.782363] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.782563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.782781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.782807] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.783024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.783266] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.783294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 
00:30:10.181 [2024-04-15 02:04:55.783516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.783706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.783733] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.783948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.784194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.784221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.784420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.784612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.784643] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.784865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.785092] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.785140] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.785325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.785545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.785571] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.785816] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.786040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.181 [2024-04-15 02:04:55.786084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.181 qpair failed and we were unable to recover it. 00:30:10.181 [2024-04-15 02:04:55.786277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.182 [2024-04-15 02:04:55.786475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.182 [2024-04-15 02:04:55.786504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.182 qpair failed and we were unable to recover it. 
00:30:10.457 [2024-04-15 02:04:55.854457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.457 [2024-04-15 02:04:55.854682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.457 [2024-04-15 02:04:55.854707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.457 qpair failed and we were unable to recover it. 00:30:10.457 [2024-04-15 02:04:55.854925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.855145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.855172] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.855373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.855560] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.855585] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.855804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.856029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.856061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.856253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.856480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.856506] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.856730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.856948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.856973] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.857200] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.857428] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.857454] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 
00:30:10.458 [2024-04-15 02:04:55.857669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.857914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.857940] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.858165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.858371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.858397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.858589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.858815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.858845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.859032] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.859239] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.859267] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.859487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.859729] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.859756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.859956] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.860173] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.860200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.860447] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.860634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.860660] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 
00:30:10.458 [2024-04-15 02:04:55.860860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.861058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.861085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.861282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.861531] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.861557] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.861761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.862009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.862034] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.862228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.862413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.862439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.862634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.862887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.862914] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.863165] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.863357] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.863385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.863595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.863813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.863839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 
00:30:10.458 [2024-04-15 02:04:55.864054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.864254] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.864280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.864496] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.864694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.864719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.864909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.865107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.865135] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.865342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.865561] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.865587] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.865806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.865990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.866016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.866253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.866453] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.866479] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.866698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.866885] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.866911] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 
00:30:10.458 [2024-04-15 02:04:55.867114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.867361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.867387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.458 qpair failed and we were unable to recover it. 00:30:10.458 [2024-04-15 02:04:55.867578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.458 [2024-04-15 02:04:55.867796] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.867822] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.868056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.868243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.868270] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.868469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.868659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.868684] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.868904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.869131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.869157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.869351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.869601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.869627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.869851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.870044] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.870075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 
00:30:10.459 [2024-04-15 02:04:55.870272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.870494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.870520] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.870768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.870982] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.871007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.871234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.871460] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.871486] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.871703] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.871900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.871926] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.872147] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.872335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.872361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.872565] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.872750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.872776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.872986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.873177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.873205] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 
00:30:10.459 [2024-04-15 02:04:55.873399] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.873647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.873673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.873872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.874096] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.874123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.874321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.874524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.874550] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.874777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.874999] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.875025] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.875238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.875443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.875469] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.875686] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.875875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.875903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.876107] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.876299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.876331] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 
00:30:10.459 [2024-04-15 02:04:55.876547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.876770] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.876796] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.877014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.877212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.877238] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.877459] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.877684] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.877712] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.877909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.878128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.878154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.878352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.878579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.878605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.878853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.879050] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.879077] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 00:30:10.459 [2024-04-15 02:04:55.879304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.879552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.459 [2024-04-15 02:04:55.879579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.459 qpair failed and we were unable to recover it. 
00:30:10.459 [2024-04-15 02:04:55.879779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.880028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.880062] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.880288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.880493] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.880519] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.880714] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.880926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.880952] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.881179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.881365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.881391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.881647] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.881872] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.881899] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.882124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.882320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.882351] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.882537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.882753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.882779] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 
00:30:10.460 [2024-04-15 02:04:55.882971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.883159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.883186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.883411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.883601] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.883627] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.883845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.884071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.884100] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.884289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.884514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.884540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.884759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.884949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.884975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.885208] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.885462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.885489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.885710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.885908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.885934] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 
00:30:10.460 [2024-04-15 02:04:55.886167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.886356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.886382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.886599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.886790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.886817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.887037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.887282] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.887319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.887513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.887732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.887759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.887984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.888204] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.888230] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.888478] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.888675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.888700] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.888898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.889114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.889150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 
00:30:10.460 [2024-04-15 02:04:55.889376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.889571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.889597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.889818] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.890039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.890074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.890304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.893242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.893284] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.893506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.893713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.893741] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.893946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.894176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.894203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.894424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.894615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.894642] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.460 qpair failed and we were unable to recover it. 00:30:10.460 [2024-04-15 02:04:55.894862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.895061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.460 [2024-04-15 02:04:55.895097] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 
00:30:10.461 [2024-04-15 02:04:55.895319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.895512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.895538] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 00:30:10.461 [2024-04-15 02:04:55.895761] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.895981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.896007] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 00:30:10.461 [2024-04-15 02:04:55.896257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.896458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.896484] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 00:30:10.461 [2024-04-15 02:04:55.896707] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.896898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.896928] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 00:30:10.461 [2024-04-15 02:04:55.897151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.897348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.897375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 00:30:10.461 [2024-04-15 02:04:55.897563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.897784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.897811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 00:30:10.461 [2024-04-15 02:04:55.898040] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.898234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.898261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 
00:30:10.461 [2024-04-15 02:04:55.898455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.898648] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.898674] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 00:30:10.461 [2024-04-15 02:04:55.898857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.899056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.899094] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 00:30:10.461 [2024-04-15 02:04:55.899309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.899504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.899531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 00:30:10.461 [2024-04-15 02:04:55.899728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.899924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.899950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 00:30:10.461 [2024-04-15 02:04:55.900149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.900342] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.900371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 00:30:10.461 [2024-04-15 02:04:55.900585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.900777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.900803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 00:30:10.461 [2024-04-15 02:04:55.900986] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.901241] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.901268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 
00:30:10.461 [2024-04-15 02:04:55.901462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.901681] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.901708] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 00:30:10.461 [2024-04-15 02:04:55.901928] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.902116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.902143] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 00:30:10.461 [2024-04-15 02:04:55.902368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.902569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.902597] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 00:30:10.461 [2024-04-15 02:04:55.902845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.903039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.903081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 00:30:10.461 [2024-04-15 02:04:55.903297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.903510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.903537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 00:30:10.461 [2024-04-15 02:04:55.903752] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.903966] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.903992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 00:30:10.461 [2024-04-15 02:04:55.904179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.904373] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.461 [2024-04-15 02:04:55.904400] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.461 qpair failed and we were unable to recover it. 
00:30:10.467 [2024-04-15 02:04:55.972071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.972300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.972337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.467 qpair failed and we were unable to recover it. 00:30:10.467 [2024-04-15 02:04:55.972534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.972755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.972782] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.467 qpair failed and we were unable to recover it. 00:30:10.467 [2024-04-15 02:04:55.973003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.973238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.973265] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.467 qpair failed and we were unable to recover it. 00:30:10.467 [2024-04-15 02:04:55.973462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.973658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.973686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.467 qpair failed and we were unable to recover it. 00:30:10.467 [2024-04-15 02:04:55.973937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.974134] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.974161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.467 qpair failed and we were unable to recover it. 00:30:10.467 [2024-04-15 02:04:55.974380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.974572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.974598] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.467 qpair failed and we were unable to recover it. 00:30:10.467 [2024-04-15 02:04:55.974787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.975014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.975040] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.467 qpair failed and we were unable to recover it. 
00:30:10.467 [2024-04-15 02:04:55.975284] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.975485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.975513] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.467 qpair failed and we were unable to recover it. 00:30:10.467 [2024-04-15 02:04:55.975706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.975895] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.975921] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.467 qpair failed and we were unable to recover it. 00:30:10.467 [2024-04-15 02:04:55.976145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.976379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.976406] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.467 qpair failed and we were unable to recover it. 00:30:10.467 [2024-04-15 02:04:55.976635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.976851] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.976877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.467 qpair failed and we were unable to recover it. 00:30:10.467 [2024-04-15 02:04:55.977087] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.977274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.977300] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.467 qpair failed and we were unable to recover it. 00:30:10.467 [2024-04-15 02:04:55.977558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.977783] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.977809] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.467 qpair failed and we were unable to recover it. 00:30:10.467 [2024-04-15 02:04:55.978026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.978233] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.467 [2024-04-15 02:04:55.978261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.467 qpair failed and we were unable to recover it. 
00:30:10.467 [2024-04-15 02:04:55.978461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.978653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.978680] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.978910] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.979135] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.979162] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.979359] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.979580] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.979606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.979822] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.980025] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.980065] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.980299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.980530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.980556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.980754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.980978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.981005] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.981220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.981406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.981432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 
00:30:10.468 [2024-04-15 02:04:55.981627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.981850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.981877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.982074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.982271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.982297] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.982484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.982695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.982721] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.982907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.983103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.983129] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.983348] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.983548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.983575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.983803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.984021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.984054] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.984250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.984487] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.984514] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 
00:30:10.468 [2024-04-15 02:04:55.984734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.984927] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.984956] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.985157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.985360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.985387] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.985600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.985787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.985813] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.986037] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.986269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.986295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.986488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.986712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.986739] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.986971] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.987162] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.987189] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.987383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.987615] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.987641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 
00:30:10.468 [2024-04-15 02:04:55.987840] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.988030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.988072] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.988275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.988505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.988530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.988716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.988933] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.988958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.989151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.989344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.989371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.989555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.989787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.989818] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.990014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.990250] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.990277] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.468 qpair failed and we were unable to recover it. 00:30:10.468 [2024-04-15 02:04:55.990474] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.468 [2024-04-15 02:04:55.990696] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.990723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 
00:30:10.469 [2024-04-15 02:04:55.990969] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.991159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.991185] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:55.991391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.991585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.991611] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:55.991837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.992034] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.992068] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:55.992286] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.992516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.992543] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:55.992741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.992965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.992992] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:55.993252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.993508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.993535] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:55.993730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.993920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.993946] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 
00:30:10.469 [2024-04-15 02:04:55.994166] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.994383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.994415] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:55.994610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.994841] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.994868] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:55.995078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.995277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.995303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:55.995507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.995755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.995781] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:55.995996] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.996198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.996225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:55.996421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.996638] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.996664] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:55.996904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.997124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.997150] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 
00:30:10.469 [2024-04-15 02:04:55.997385] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.997579] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.997605] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:55.997836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.998065] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.998104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:55.998313] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.998535] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.998562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:55.998759] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.998948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.998981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:55.999196] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.999383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.999410] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:55.999625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.999850] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:55.999878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:56.000100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:56.000296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:56.000335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 
00:30:10.469 [2024-04-15 02:04:56.000523] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:56.000721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:56.000750] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:56.000948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:56.001171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:56.001198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:56.001412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:56.001646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:56.001673] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:56.001904] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:56.002101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:56.002127] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:56.002355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:56.002576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:56.002603] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.469 qpair failed and we were unable to recover it. 00:30:10.469 [2024-04-15 02:04:56.002834] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.469 [2024-04-15 02:04:56.003026] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.003060] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.003295] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.003495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.003528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 
00:30:10.470 [2024-04-15 02:04:56.003725] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.003948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.003975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.004198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.004423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.004450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.004672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.004859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.004885] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.005079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.005288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.005327] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.005532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.005744] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.005771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.005987] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.006176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.006204] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.006400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.006589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.006616] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 
00:30:10.470 [2024-04-15 02:04:56.006804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.007028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.007061] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.007297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.007511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.007539] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.007791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.008008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.008035] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.008255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.008461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.008489] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.008721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.008949] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.008975] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.009216] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.009440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.009468] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.009665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.009898] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.009925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 
00:30:10.470 [2024-04-15 02:04:56.010124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.010320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.010347] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.010545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.010741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.010769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.010991] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.011213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.011240] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.011465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.011716] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.011743] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.011989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.012205] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.012232] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.012456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.012689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.012715] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.012973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.013167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.013195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 
00:30:10.470 [2024-04-15 02:04:56.013394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.013621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.013647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.013844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.014039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.014071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.014263] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.014462] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.014490] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.014687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.014914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.014941] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.470 [2024-04-15 02:04:56.015167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.015400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.470 [2024-04-15 02:04:56.015427] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.470 qpair failed and we were unable to recover it. 00:30:10.471 [2024-04-15 02:04:56.015619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.471 [2024-04-15 02:04:56.015805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.471 [2024-04-15 02:04:56.015832] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.471 qpair failed and we were unable to recover it. 00:30:10.471 [2024-04-15 02:04:56.016029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.471 [2024-04-15 02:04:56.016267] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.471 [2024-04-15 02:04:56.016295] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.471 qpair failed and we were unable to recover it. 
00:30:10.471 [2024-04-15 02:04:56.016517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.471 [2024-04-15 02:04:56.016739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.471 [2024-04-15 02:04:56.016767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:10.471 qpair failed and we were unable to recover it.
00:30:10.471 [2024-04-15 02:04:56.016958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.471 [2024-04-15 02:04:56.017163] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.471 [2024-04-15 02:04:56.017190] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:10.471 qpair failed and we were unable to recover it.
[... the same four-record failure unit — two posix.c:1032:posix_sock_create "connect() failed, errno = 111" errors, one nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420", and "qpair failed and we were unable to recover it." — repeats verbatim for every subsequent connection attempt, with only the microsecond timestamps changing, from 02:04:56.017 through 02:04:56.086 ...]
00:30:10.476 [2024-04-15 02:04:56.087152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.476 [2024-04-15 02:04:56.087346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.476 [2024-04-15 02:04:56.087371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.476 qpair failed and we were unable to recover it. 00:30:10.476 [2024-04-15 02:04:56.087591] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.476 [2024-04-15 02:04:56.087792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.477 [2024-04-15 02:04:56.087819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.477 qpair failed and we were unable to recover it. 00:30:10.477 [2024-04-15 02:04:56.088055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.477 [2024-04-15 02:04:56.088277] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.477 [2024-04-15 02:04:56.088303] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.477 qpair failed and we were unable to recover it. 00:30:10.748 [2024-04-15 02:04:56.088497] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.088745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.088772] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 00:30:10.748 [2024-04-15 02:04:56.088983] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.089179] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.089206] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 00:30:10.748 [2024-04-15 02:04:56.089425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.089672] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.089697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 00:30:10.748 [2024-04-15 02:04:56.089896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.090119] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.090146] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 
00:30:10.748 [2024-04-15 02:04:56.090395] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.090612] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.090638] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 00:30:10.748 [2024-04-15 02:04:56.090837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.091058] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.091085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 00:30:10.748 [2024-04-15 02:04:56.091334] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.091551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.091579] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 00:30:10.748 [2024-04-15 02:04:56.091771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.092031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.092063] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 00:30:10.748 [2024-04-15 02:04:56.092307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.092534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.092560] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 00:30:10.748 [2024-04-15 02:04:56.092785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.093006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.093031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 00:30:10.748 [2024-04-15 02:04:56.093260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.093510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.093537] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 
00:30:10.748 [2024-04-15 02:04:56.093755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.093977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.094004] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 00:30:10.748 [2024-04-15 02:04:56.094240] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.094429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.094455] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 00:30:10.748 [2024-04-15 02:04:56.094676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.094862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.094888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 00:30:10.748 [2024-04-15 02:04:56.095139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.095386] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.095412] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 00:30:10.748 [2024-04-15 02:04:56.095609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.095859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.095886] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 00:30:10.748 [2024-04-15 02:04:56.096120] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.096368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.096394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 00:30:10.748 [2024-04-15 02:04:56.096587] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.096812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.096837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 
00:30:10.748 [2024-04-15 02:04:56.097069] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.097306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.748 [2024-04-15 02:04:56.097333] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.748 qpair failed and we were unable to recover it. 00:30:10.748 [2024-04-15 02:04:56.097554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.097779] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.097808] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.098027] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.098289] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.098316] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.098513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.098708] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.098736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.098937] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.099161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.099188] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.099415] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.099607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.099634] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.099847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.100074] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.100104] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 
00:30:10.749 [2024-04-15 02:04:56.100304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.100547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.100574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.100769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.100993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.101019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.101237] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.101455] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.101482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.101710] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.101938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.101966] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.102188] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.102404] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.102430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.102629] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.102852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.102878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.103078] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.103309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.103335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 
00:30:10.749 [2024-04-15 02:04:56.103559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.103745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.103771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.104018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.104252] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.104279] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.104537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.104785] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.104811] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.105021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.105231] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.105258] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.105469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.105687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.105713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.105912] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.106133] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.106161] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.106387] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.106614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.106641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 
00:30:10.749 [2024-04-15 02:04:56.106844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.107066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.107105] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.107328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.107514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.107540] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.107784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.108000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.108026] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.108255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.108456] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.108482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.108697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.108879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.108904] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.109128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.109349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.109375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.109626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.109847] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.109873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 
00:30:10.749 [2024-04-15 02:04:56.110104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.110288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.110322] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.749 qpair failed and we were unable to recover it. 00:30:10.749 [2024-04-15 02:04:56.110541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.749 [2024-04-15 02:04:56.110740] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.110768] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.111015] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.111244] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.111271] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.111461] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.111649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.111677] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.111900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.112122] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.112149] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.112367] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.112618] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.112644] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.112854] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.113077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.113103] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 
00:30:10.750 [2024-04-15 02:04:56.113307] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.113551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.113578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.113799] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.114054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.114081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.114278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.114504] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.114531] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.114754] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.114972] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.114998] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.115194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.115394] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.115420] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.115614] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.115835] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.115861] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.116100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.116296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.116329] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 
00:30:10.750 [2024-04-15 02:04:56.116520] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.116743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.116769] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.116958] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.117150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.117177] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.117383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.117602] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.117628] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.117821] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.118067] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.118098] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.118318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.118508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.118534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.118757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.118963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.118991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.119187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.119413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.119440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 
00:30:10.750 [2024-04-15 02:04:56.119637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.119828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.119854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.120093] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.120314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.120341] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.120538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.120735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.120762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.120960] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.121176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.121203] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.121413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.121604] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.121632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.121824] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.122041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.122073] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.122269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.122510] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.122536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 
00:30:10.750 [2024-04-15 02:04:56.122757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.122975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.123001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.123210] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.123408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.750 [2024-04-15 02:04:56.123435] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.750 qpair failed and we were unable to recover it. 00:30:10.750 [2024-04-15 02:04:56.123655] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.123862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.123888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 00:30:10.751 [2024-04-15 02:04:56.124118] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.124344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.124370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 00:30:10.751 [2024-04-15 02:04:56.124586] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.124777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.124803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 00:30:10.751 [2024-04-15 02:04:56.125029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.125235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.125263] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 00:30:10.751 [2024-04-15 02:04:56.125475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.125665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.125690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 
00:30:10.751 [2024-04-15 02:04:56.125888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.126113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.126141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 00:30:10.751 [2024-04-15 02:04:56.126344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.126563] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.126589] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 00:30:10.751 [2024-04-15 02:04:56.126808] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.127003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.127029] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 00:30:10.751 [2024-04-15 02:04:56.127269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.127485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.127512] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 00:30:10.751 [2024-04-15 02:04:56.127731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.127924] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.127950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 00:30:10.751 [2024-04-15 02:04:56.128143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.128398] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.128424] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 00:30:10.751 [2024-04-15 02:04:56.128643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.128842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.128867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 
00:30:10.751 [2024-04-15 02:04:56.129099] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.129297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.129324] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 00:30:10.751 [2024-04-15 02:04:56.129524] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.129738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.129764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 00:30:10.751 [2024-04-15 02:04:56.129981] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.130198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.130225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 00:30:10.751 [2024-04-15 02:04:56.130448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.130667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.130693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 00:30:10.751 [2024-04-15 02:04:56.130915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.131136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.131163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 00:30:10.751 [2024-04-15 02:04:56.131352] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.131546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.131574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 00:30:10.751 [2024-04-15 02:04:56.131767] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.131989] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.751 [2024-04-15 02:04:56.132015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.751 qpair failed and we were unable to recover it. 
00:30:10.751 [2024-04-15 02:04:56.132221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.751 [2024-04-15 02:04:56.132454] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.751 [2024-04-15 02:04:56.132480] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:10.751 qpair failed and we were unable to recover it.
[... the same three-message sequence (two posix_sock_create "connect() failed, errno = 111" records, one nvme_tcp_qpair_connect_sock "sock connection error" record, then "qpair failed and we were unable to recover it.") repeats continuously for tqpair=0x7f50f4000b90 (addr=10.0.0.2, port=4420) between 02:04:56.132 and 02:04:56.203 ...]
00:30:10.757 [2024-04-15 02:04:56.203123] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.757 [2024-04-15 02:04:56.203343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:10.757 [2024-04-15 02:04:56.203370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420
00:30:10.757 qpair failed and we were unable to recover it.
00:30:10.757 [2024-04-15 02:04:56.203590] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.203781] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.203806] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.757 qpair failed and we were unable to recover it. 00:30:10.757 [2024-04-15 02:04:56.204021] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.204222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.204248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.757 qpair failed and we were unable to recover it. 00:30:10.757 [2024-04-15 02:04:56.204443] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.204641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.204670] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.757 qpair failed and we were unable to recover it. 00:30:10.757 [2024-04-15 02:04:56.204864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.205091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.205119] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.757 qpair failed and we were unable to recover it. 00:30:10.757 [2024-04-15 02:04:56.205344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.205568] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.205594] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.757 qpair failed and we were unable to recover it. 00:30:10.757 [2024-04-15 02:04:56.205828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.206055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.206084] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.757 qpair failed and we were unable to recover it. 00:30:10.757 [2024-04-15 02:04:56.206312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.206530] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.206556] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.757 qpair failed and we were unable to recover it. 
00:30:10.757 [2024-04-15 02:04:56.206775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.206998] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.207024] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.757 qpair failed and we were unable to recover it. 00:30:10.757 [2024-04-15 02:04:56.207230] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.207431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.207462] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.757 qpair failed and we were unable to recover it. 00:30:10.757 [2024-04-15 02:04:56.207666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.207855] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.207881] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.757 qpair failed and we were unable to recover it. 00:30:10.757 [2024-04-15 02:04:56.208085] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.208274] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.208312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.757 qpair failed and we were unable to recover it. 00:30:10.757 [2024-04-15 02:04:56.208559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.208741] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.208767] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.757 qpair failed and we were unable to recover it. 00:30:10.757 [2024-04-15 02:04:56.208997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.209229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.209255] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.757 qpair failed and we were unable to recover it. 00:30:10.757 [2024-04-15 02:04:56.209482] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.209702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.209728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.757 qpair failed and we were unable to recover it. 
00:30:10.757 [2024-04-15 02:04:56.209980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.210174] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.210200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.757 qpair failed and we were unable to recover it. 00:30:10.757 [2024-04-15 02:04:56.210431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.210645] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.210671] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.757 qpair failed and we were unable to recover it. 00:30:10.757 [2024-04-15 02:04:56.210918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.211144] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.211171] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.757 qpair failed and we were unable to recover it. 00:30:10.757 [2024-04-15 02:04:56.211374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.211589] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.211615] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.757 qpair failed and we were unable to recover it. 00:30:10.757 [2024-04-15 02:04:56.211844] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.757 [2024-04-15 02:04:56.212068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.212109] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.212329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.212577] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.212602] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.212798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.213024] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.213058] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 
00:30:10.758 [2024-04-15 02:04:56.213253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.213471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.213497] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.213720] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.213931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.213957] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.214150] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.214381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.214407] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.214630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.214849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.214875] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.215101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.215322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.215348] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.215534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.215749] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.215775] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.215995] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.216220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.216247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 
00:30:10.758 [2024-04-15 02:04:56.216479] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.216693] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.216724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.216955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.217157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.217186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.217411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.217607] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.217633] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.217849] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.218062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.218088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.218308] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.218526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.218552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.218738] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.218943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.218971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.219189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.219419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.219445] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 
00:30:10.758 [2024-04-15 02:04:56.219667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.219886] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.219912] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.220111] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.220311] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.220337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.220564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.220789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.220815] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.221014] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.221223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.221254] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.221480] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.221687] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.221713] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.221903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.222106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.222133] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.222366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.222593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.222620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 
00:30:10.758 [2024-04-15 02:04:56.222845] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.223090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.223123] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.223321] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.223537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.223562] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.223753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.224011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.224037] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.224273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.224469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.224495] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.224743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.224963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.758 [2024-04-15 02:04:56.224989] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.758 qpair failed and we were unable to recover it. 00:30:10.758 [2024-04-15 02:04:56.225213] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.225412] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.225439] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.225632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.225827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.225853] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 
00:30:10.759 [2024-04-15 02:04:56.226080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.226273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.226299] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.226534] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.226721] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.226748] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.226975] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.227168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.227195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.227421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.227605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.227632] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.227857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.228052] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.228079] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.228279] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.228505] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.228532] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.228753] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.228974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.229001] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 
00:30:10.759 [2024-04-15 02:04:56.229209] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.229405] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.229432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.229656] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.229876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.229903] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.230113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.230318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.230352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.230557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.230756] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.230785] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.231003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.231246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.231272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.231539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.231762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.231789] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.232009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.232215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.232242] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 
00:30:10.759 [2024-04-15 02:04:56.232465] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.232660] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.232686] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.232883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.233090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.233125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.233347] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.233548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.233575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.233790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.233985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.234012] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.234211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.234430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.234457] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.234640] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.234873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.234901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.235128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.235322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.235353] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 
00:30:10.759 [2024-04-15 02:04:56.235546] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.235735] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.235762] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.235985] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.236224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.236251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.236484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.236706] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.236732] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.236977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.237183] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.237211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.237431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.237621] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.237647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.759 [2024-04-15 02:04:56.237865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.238061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.759 [2024-04-15 02:04:56.238088] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.759 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.238291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.238491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.238518] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 
00:30:10.760 [2024-04-15 02:04:56.238718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.238942] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.238969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.239212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.239400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.239430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.239624] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.239813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.239845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.240070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.240292] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.240318] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.240512] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.240731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.240756] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.240954] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.241172] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.241200] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.241417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.241644] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.241669] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 
00:30:10.760 [2024-04-15 02:04:56.241867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.242095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.242120] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.242314] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.242538] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.242563] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.242787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.243006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.243031] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.243264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.243489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.243517] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.243719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.243925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.243950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.244176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.244393] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.244418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.244609] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.244798] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.244823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 
00:30:10.760 [2024-04-15 02:04:56.245009] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.245243] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.245268] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.245492] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.245691] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.245716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.245931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.246128] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.246154] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.246379] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.246567] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.246592] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.246813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.247008] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.247033] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.247222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.247411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.247436] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 00:30:10.760 [2024-04-15 02:04:56.247657] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.247857] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.247882] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it. 
00:30:10.760 [2024-04-15 02:04:56.248097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.248291] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.760 [2024-04-15 02:04:56.248317] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.760 qpair failed and we were unable to recover it.
[... the same sequence repeats with only timestamps advancing, from 02:04:56.248 through 02:04:56.316: two posix_sock_create connect() failures (errno = 111), one nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7f50f4000b90 (addr=10.0.0.2, port=4420), then "qpair failed and we were unable to recover it." ...]
00:30:10.766 [2024-04-15 02:04:56.316258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.316445] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.316470] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.766 qpair failed and we were unable to recover it.
00:30:10.766 [2024-04-15 02:04:56.316658] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.316876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.316901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.766 qpair failed and we were unable to recover it. 00:30:10.766 [2024-04-15 02:04:56.317124] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.317350] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.317376] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.766 qpair failed and we were unable to recover it. 00:30:10.766 [2024-04-15 02:04:56.317597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.317789] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.317814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.766 qpair failed and we were unable to recover it. 00:30:10.766 [2024-04-15 02:04:56.318029] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.318225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.318251] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.766 qpair failed and we were unable to recover it. 00:30:10.766 [2024-04-15 02:04:56.318440] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.318666] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.318691] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.766 qpair failed and we were unable to recover it. 00:30:10.766 [2024-04-15 02:04:56.318876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.319103] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.319128] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.766 qpair failed and we were unable to recover it. 00:30:10.766 [2024-04-15 02:04:56.319329] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.319551] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.319576] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.766 qpair failed and we were unable to recover it. 
00:30:10.766 [2024-04-15 02:04:56.319800] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.320016] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.320041] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.766 qpair failed and we were unable to recover it. 00:30:10.766 [2024-04-15 02:04:56.320253] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.320471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.320496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.766 qpair failed and we were unable to recover it. 00:30:10.766 [2024-04-15 02:04:56.320689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.320882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.320906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.766 qpair failed and we were unable to recover it. 00:30:10.766 [2024-04-15 02:04:56.321095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.321318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.321343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.766 qpair failed and we were unable to recover it. 00:30:10.766 [2024-04-15 02:04:56.321557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.321746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.321771] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.766 qpair failed and we were unable to recover it. 00:30:10.766 [2024-04-15 02:04:56.321993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.322184] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.322211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.766 qpair failed and we were unable to recover it. 00:30:10.766 [2024-04-15 02:04:56.322400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.322630] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.322655] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.766 qpair failed and we were unable to recover it. 
00:30:10.766 [2024-04-15 02:04:56.322865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.323097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.323122] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.766 qpair failed and we were unable to recover it. 00:30:10.766 [2024-04-15 02:04:56.323372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.323595] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.766 [2024-04-15 02:04:56.323620] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.766 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.323838] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.324059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.324086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.324302] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.324521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.324545] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.324733] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.324946] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.324971] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.325189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.325378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.325404] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.325600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.325819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.325843] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 
00:30:10.767 [2024-04-15 02:04:56.326061] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.326255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.326280] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.326526] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.326713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.326738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.326951] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.327141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.327166] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.327363] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.327555] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.327580] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.327828] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.328049] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.328075] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.328258] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.328451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.328478] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.328665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.328882] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.328906] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 
00:30:10.767 [2024-04-15 02:04:56.329131] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.329320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.329345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.329594] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.329842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.329866] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.330101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.330349] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.330375] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.330569] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.330787] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.330812] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.331031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.331234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.331260] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.331506] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.331694] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.331719] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.331908] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.332130] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.332157] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 
00:30:10.767 [2024-04-15 02:04:56.332358] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.332545] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.332569] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.332790] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.333013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.333039] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.333255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.333503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.333528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.333713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.333920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.333947] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.334146] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.334337] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.334362] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.334553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.334778] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.334803] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.335020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.335229] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.335256] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 
00:30:10.767 [2024-04-15 02:04:56.335451] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.335641] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.335667] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.335848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.336075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.336101] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.767 qpair failed and we were unable to recover it. 00:30:10.767 [2024-04-15 02:04:56.336320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.767 [2024-04-15 02:04:56.336511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.336536] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.336757] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.336978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.337002] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.337256] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.337449] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.337474] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.337698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.337889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.337915] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.338141] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.338336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.338361] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 
00:30:10.768 [2024-04-15 02:04:56.338582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.338777] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.338805] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.339031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.339222] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.339247] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.339450] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.339667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.339692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.339879] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.340100] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.340125] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.340354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.340552] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.340578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.340803] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.341018] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.341044] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.341238] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.341423] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.341448] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 
00:30:10.768 [2024-04-15 02:04:56.341665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.341853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.341878] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.342106] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.342325] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.342352] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.342541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.342768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.342793] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.343012] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.343220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.343245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.343467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.343698] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.343723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.343929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.344149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.344175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.344368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.344558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.344583] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 
00:30:10.768 [2024-04-15 02:04:56.344769] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.345013] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.345052] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.345280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.345501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.345527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.345747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.345941] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.345968] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.346157] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.346370] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.346394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.346593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.346817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.346844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.347062] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.347283] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.347315] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.347541] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.347755] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.347780] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 
00:30:10.768 [2024-04-15 02:04:56.347993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.348234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.348261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.348477] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.348745] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.348770] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.348988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.349223] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.768 [2024-04-15 02:04:56.349248] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.768 qpair failed and we were unable to recover it. 00:30:10.768 [2024-04-15 02:04:56.349438] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.349669] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.349693] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.349920] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.350114] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.350141] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.350365] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.350548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.350572] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.350797] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.351020] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.351056] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 
00:30:10.769 [2024-04-15 02:04:56.351278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.351475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.351500] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.351717] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.351914] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.351943] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.352148] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.352343] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.352370] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.352599] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.352791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.352816] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.353038] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.353261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.353286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.353513] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.353734] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.353759] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.353977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.354198] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.354224] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 
00:30:10.769 [2024-04-15 02:04:56.354419] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.354665] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.354690] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.354888] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.355113] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.355138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.355360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.355558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.355584] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.355810] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.356060] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.356086] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.356288] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.356519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.356548] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.356743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.356990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.357015] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.357228] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.357413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.357438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 
00:30:10.769 [2024-04-15 02:04:56.357654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.357871] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.357896] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.358095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.358319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.358354] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.358585] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.358833] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.358858] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.359070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.359294] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.359320] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.359518] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.359730] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.359755] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.360003] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.360236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.360261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.360489] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.360712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.360737] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 
00:30:10.769 [2024-04-15 02:04:56.360929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.361151] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.361182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.361407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.361628] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.361654] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.361867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.362088] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.362113] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.769 [2024-04-15 02:04:56.362304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.362498] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.769 [2024-04-15 02:04:56.362523] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.769 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.362771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.362984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.363009] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.363234] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.363457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.363482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.363699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.363913] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.363938] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 
00:30:10.770 [2024-04-15 02:04:56.364194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.364413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.364438] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.364653] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.364876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.364901] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.365097] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.365319] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.365345] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.365574] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.365771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.365795] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.365978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.366194] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.366221] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.366413] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.366625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.366649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.366861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.367056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.367083] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 
00:30:10.770 [2024-04-15 02:04:56.367297] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.367501] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.367527] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.367775] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.367963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.367988] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.368211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.368403] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.368430] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.368649] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.368866] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.368893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.369142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.369333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.369359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.369558] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.369812] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.369838] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.370056] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.370300] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.370326] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 
00:30:10.770 [2024-04-15 02:04:56.370527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.370748] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.370773] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.370988] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.371182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.371208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.371406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.371625] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.371650] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.371867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.372112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.372137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.372361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.372548] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.372574] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.372773] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.373022] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.373074] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.373306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.373502] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.373528] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 
00:30:10.770 [2024-04-15 02:04:56.373747] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.373935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.373960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.374177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.374421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.374446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.374634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.374831] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.374857] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.375057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.375276] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.375301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.770 [2024-04-15 02:04:56.375519] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.375762] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.770 [2024-04-15 02:04:56.375787] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.770 qpair failed and we were unable to recover it. 00:30:10.771 [2024-04-15 02:04:56.375980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.376187] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.376213] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.771 qpair failed and we were unable to recover it. 00:30:10.771 [2024-04-15 02:04:56.376406] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.376603] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.376630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.771 qpair failed and we were unable to recover it. 
00:30:10.771 [2024-04-15 02:04:56.376817] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.377036] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.377066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.771 qpair failed and we were unable to recover it. 00:30:10.771 [2024-04-15 02:04:56.377287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.377516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.377542] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.771 qpair failed and we were unable to recover it. 00:30:10.771 [2024-04-15 02:04:56.377731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.377978] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.378003] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.771 qpair failed and we were unable to recover it. 00:30:10.771 [2024-04-15 02:04:56.378271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.378522] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.378547] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.771 qpair failed and we were unable to recover it. 00:30:10.771 [2024-04-15 02:04:56.378768] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.378957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.378981] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.771 qpair failed and we were unable to recover it. 00:30:10.771 [2024-04-15 02:04:56.379211] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.379407] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.379432] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.771 qpair failed and we were unable to recover it. 00:30:10.771 [2024-04-15 02:04:56.379646] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.379864] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.379889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.771 qpair failed and we were unable to recover it. 
00:30:10.771 [2024-04-15 02:04:56.380137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.380336] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.380369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.771 qpair failed and we were unable to recover it. 00:30:10.771 [2024-04-15 02:04:56.380566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.380784] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.380810] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.771 qpair failed and we were unable to recover it. 00:30:10.771 [2024-04-15 02:04:56.381006] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.381255] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.381281] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.771 qpair failed and we were unable to recover it. 00:30:10.771 [2024-04-15 02:04:56.381508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.381701] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.381728] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.771 qpair failed and we were unable to recover it. 00:30:10.771 [2024-04-15 02:04:56.381944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.382137] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.382163] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.771 qpair failed and we were unable to recover it. 00:30:10.771 [2024-04-15 02:04:56.382360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.382553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:10.771 [2024-04-15 02:04:56.382578] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:10.771 qpair failed and we were unable to recover it. 00:30:10.771 [2024-04-15 02:04:56.382805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.038 [2024-04-15 02:04:56.382990] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.038 [2024-04-15 02:04:56.383016] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.038 qpair failed and we were unable to recover it. 
00:30:11.038 [2024-04-15 02:04:56.383218] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.038 [2024-04-15 02:04:56.383420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.038 [2024-04-15 02:04:56.383446] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.038 qpair failed and we were unable to recover it. 00:30:11.038 [2024-04-15 02:04:56.383632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.038 [2024-04-15 02:04:56.383819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.038 [2024-04-15 02:04:56.383845] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.038 qpair failed and we were unable to recover it. 00:30:11.038 [2024-04-15 02:04:56.384042] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.038 [2024-04-15 02:04:56.384264] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.038 [2024-04-15 02:04:56.384290] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.038 qpair failed and we were unable to recover it. 00:30:11.038 [2024-04-15 02:04:56.384500] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.038 [2024-04-15 02:04:56.384695] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.038 [2024-04-15 02:04:56.384723] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.038 qpair failed and we were unable to recover it. 00:30:11.038 [2024-04-15 02:04:56.384943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.038 [2024-04-15 02:04:56.385168] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.038 [2024-04-15 02:04:56.385195] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.038 qpair failed and we were unable to recover it. 00:30:11.038 [2024-04-15 02:04:56.385417] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.385662] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.385687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.385918] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.386110] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.386136] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 
00:30:11.039 [2024-04-15 02:04:56.386341] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.386557] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.386582] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.386806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.387041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.387071] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.387273] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.387499] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.387525] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.387719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.387974] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.388000] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.388224] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.388471] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.388496] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.388746] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.388935] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.388960] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.389178] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.389369] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.389394] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 
00:30:11.039 [2024-04-15 02:04:56.389639] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.389860] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.389884] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.390105] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.390322] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.390355] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.390572] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.390766] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.390791] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.390984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.391176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.391202] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.391391] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.391581] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.391606] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.391791] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.392011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.392036] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.392235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.392457] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.392482] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 
00:30:11.039 [2024-04-15 02:04:56.392699] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.392917] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.392942] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.393136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.393360] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.393385] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.393619] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.393827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.393852] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.394068] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.394261] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.394286] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.394494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.394678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.394703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.394893] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.395112] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.395138] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.395354] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.395540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.395565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 
00:30:11.039 [2024-04-15 02:04:56.395751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.395938] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.395965] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.396180] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.396425] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.396450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.396674] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.396897] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.396924] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.397143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.397356] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.397381] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.397582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.397772] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.039 [2024-04-15 02:04:56.397797] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.039 qpair failed and we were unable to recover it. 00:30:11.039 [2024-04-15 02:04:56.398011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.398220] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.398245] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.398467] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.398654] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.398679] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 
00:30:11.040 [2024-04-15 02:04:56.398873] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.399070] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.399096] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.399318] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.399509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.399534] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.399750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.399955] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.399980] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.400170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.400366] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.400391] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.400578] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.400792] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.400817] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.401039] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.401269] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.401294] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.401514] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.401758] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.401783] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 
00:30:11.040 [2024-04-15 02:04:56.401977] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.402175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.402201] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.402427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.402627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.402653] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.402867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.403090] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.403117] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.403306] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.403503] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.403530] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.403751] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.403965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.403990] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.404212] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.404426] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.404451] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.404675] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.404863] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.404888] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 
00:30:11.040 [2024-04-15 02:04:56.405140] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.405361] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.405386] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.405606] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.405804] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.405829] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.406059] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.406246] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.406272] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.406466] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.406659] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.406687] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.406905] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.407095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.407121] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.407339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.407540] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.407565] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.407782] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.407961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.407986] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 
00:30:11.040 [2024-04-15 02:04:56.408232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.408416] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.408440] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.408668] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.408848] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.408873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.409125] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.409346] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.409371] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.409566] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.409788] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.409814] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.410011] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.410260] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.410287] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.040 qpair failed and we were unable to recover it. 00:30:11.040 [2024-04-15 02:04:56.410511] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.040 [2024-04-15 02:04:56.410702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.410730] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.410948] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.411149] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.411175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 
00:30:11.041 [2024-04-15 02:04:56.411401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.411622] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.411647] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.411865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.412055] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.412081] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.412268] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.412517] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.412541] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.412727] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.412944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.412969] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.413171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.413392] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.413418] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.413600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.413819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.413844] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.414031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.414236] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.414261] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 
00:30:11.041 [2024-04-15 02:04:56.414484] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.414700] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.414726] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.414919] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.415167] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.415194] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.415414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.415667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.415692] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.415915] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.416161] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.416186] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.416381] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.416627] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.416652] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.416867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.417063] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.417090] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.417280] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.417476] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.417503] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 
00:30:11.041 [2024-04-15 02:04:56.417728] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.417926] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.417954] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.418176] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.418408] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.418433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.418626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.418843] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.418867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.419091] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.419312] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.419337] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.419553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.419763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.419788] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.419980] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.420232] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.420262] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 00:30:11.041 [2024-04-15 02:04:56.420491] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.420682] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.041 [2024-04-15 02:04:56.420707] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.041 qpair failed and we were unable to recover it. 
00:30:11.047 [2024-04-15 02:04:56.483968] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.484185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.484211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.047 [2024-04-15 02:04:56.484414] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.484632] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.484657] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.047 [2024-04-15 02:04:56.484869] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.485066] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.485093] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.047 [2024-04-15 02:04:56.485287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.485472] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.485498] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.047 [2024-04-15 02:04:56.485689] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.485925] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.485950] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.047 [2024-04-15 02:04:56.486143] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.486333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.486359] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.047 [2024-04-15 02:04:56.486550] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.486750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.486776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 
00:30:11.047 [2024-04-15 02:04:56.487005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.487242] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.487269] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.047 [2024-04-15 02:04:56.487494] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.487690] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.487716] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.047 [2024-04-15 02:04:56.487934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.488158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.488184] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.047 [2024-04-15 02:04:56.488400] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.488637] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.488662] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.047 [2024-04-15 02:04:56.488889] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.489080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.489106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.047 [2024-04-15 02:04:56.489296] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.489495] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.489521] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.047 [2024-04-15 02:04:56.489711] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.489907] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.489932] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 
00:30:11.047 [2024-04-15 02:04:56.490169] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.490439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.490465] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.047 [2024-04-15 02:04:56.490697] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.490887] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.490913] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.047 [2024-04-15 02:04:56.491139] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.491344] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.491369] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.047 [2024-04-15 02:04:56.491554] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.491750] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.491776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.047 [2024-04-15 02:04:56.491997] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.492215] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.492241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.047 [2024-04-15 02:04:56.492430] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.492652] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.492682] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.047 [2024-04-15 02:04:56.492883] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.493079] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.493106] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 
00:30:11.047 [2024-04-15 02:04:56.493339] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.493527] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.047 [2024-04-15 02:04:56.493552] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.047 qpair failed and we were unable to recover it. 00:30:11.048 [2024-04-15 02:04:56.493743] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.493964] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.493991] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.048 qpair failed and we were unable to recover it. 00:30:11.048 [2024-04-15 02:04:56.494227] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.494431] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.494458] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.048 qpair failed and we were unable to recover it. 00:30:11.048 [2024-04-15 02:04:56.494712] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.494934] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.494961] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.048 qpair failed and we were unable to recover it. 00:30:11.048 [2024-04-15 02:04:56.495189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.495424] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.495450] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.048 qpair failed and we were unable to recover it. 00:30:11.048 [2024-04-15 02:04:56.495676] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.495896] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.495922] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.048 qpair failed and we were unable to recover it. 00:30:11.048 [2024-04-15 02:04:56.496170] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.496421] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.496447] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.048 qpair failed and we were unable to recover it. 
00:30:11.048 [2024-04-15 02:04:56.496634] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.496865] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.496892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.048 qpair failed and we were unable to recover it. 00:30:11.048 [2024-04-15 02:04:56.497142] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.497355] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.497397] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.048 qpair failed and we were unable to recover it. 00:30:11.048 [2024-04-15 02:04:56.497596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.497827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.497854] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.048 qpair failed and we were unable to recover it. 00:30:11.048 [2024-04-15 02:04:56.498077] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.498275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.498301] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.048 qpair failed and we were unable to recover it. 00:30:11.048 [2024-04-15 02:04:56.498521] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.498713] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.498738] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.048 qpair failed and we were unable to recover it. 00:30:11.048 [2024-04-15 02:04:56.498932] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.499156] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.499182] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.048 qpair failed and we were unable to recover it. 00:30:11.048 [2024-04-15 02:04:56.499378] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.499598] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.048 [2024-04-15 02:04:56.499623] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f50f4000b90 with addr=10.0.0.2, port=4420 00:30:11.048 qpair failed and we were unable to recover it. 
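For anyone triaging this span: errno = 111 on Linux is ECONNREFUSED, meaning the initiator's connect() reached 10.0.0.2 but nothing was accepting on TCP port 4420, so every qpair attempt dies before the NVMe/TCP handshake even starts. A minimal shell probe of the same condition, assuming a Linux host with bash's /dev/tcp redirection; this is an illustrative sketch, not part of the autotest scripts:

# Probe the listener the initiator keeps failing to reach. If nothing is
# accepting on 10.0.0.2:4420, connect() fails with ECONNREFUSED (errno 111),
# which is exactly what posix_sock_create logs above.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420'; then
    echo "port 4420 is accepting connections"
else
    echo "connect failed (connection refused or timed out)"
fi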
00:30:11.048 [... connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequences for tqpair=0x7f50f4000b90 continue from 02:04:56.499857, interleaved with the test script's xtrace output as target startup finishes ...]
00:30:11.048 02:04:56 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:30:11.048 02:04:56 -- common/autotest_common.sh@852 -- # return 0
00:30:11.048 02:04:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:30:11.048 02:04:56 -- common/autotest_common.sh@718 -- # xtrace_disable
00:30:11.048 02:04:56 -- common/autotest_common.sh@10 -- # set +x
00:30:11.048 [... connect() failed (errno = 111) / qpair 0x7f50f4000b90 recovery failures continue through 02:04:56.505969 ...]
00:30:11.049 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequences for tqpair=0x7f50f4000b90, addr=10.0.0.2, port=4420 repeat from 02:04:56.506207 through 02:04:56.518650 ...]
00:30:11.050 [... three more connect() failed (errno = 111) / qpair 0x7f50f4000b90 recovery failures, 02:04:56.518841 through 02:04:56.519921 ...]
00:30:11.050 [2024-04-15 02:04:56.519977] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15ef100 (9): Bad file descriptor
00:30:11.050 02:04:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:11.050 [2024-04-15 02:04:56.520304] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.050 02:04:56 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:11.050 02:04:56 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:11.050 [2024-04-15 02:04:56.520553] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.050 [2024-04-15 02:04:56.520581] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420
00:30:11.050 qpair failed and we were unable to recover it.
00:30:11.050 02:04:56 -- common/autotest_common.sh@10 -- # set +x
00:30:11.050 [... connect() failed (errno = 111) / sock connection error / qpair recovery failures now continue for the new tqpair=0x7f5104000b90, 02:04:56.520817 through 02:04:56.521531 ...]
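The xtrace lines above mark the script moving on despite the refused connections: nvmf/common.sh installs a cleanup trap, and host/target_disconnect.sh creates a 64 MB malloc bdev with 512-byte blocks named Malloc0 through the rpc_cmd wrapper. A standalone sketch of the same pattern, assuming an SPDK checkout with scripts/rpc.py available and using a hypothetical cleanup function in place of the real process_shm/nvmftestfini helpers:

#!/usr/bin/env bash
# Hypothetical stand-in for the nvmftestfini teardown helper in nvmf/common.sh.
cleanup() {
    echo "tearing down nvmf target and collecting leftover state"
}

# Run cleanup on interrupt, termination, or normal exit -- the same shape as
# trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
trap cleanup SIGINT SIGTERM EXIT

# Create a 64 MB malloc bdev with 512-byte blocks named Malloc0, matching the
# rpc_cmd bdev_malloc_create 64 512 -b Malloc0 call traced above.
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0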
00:30:11.050 [2024-04-15 02:04:56.521732] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.521961] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.521987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.050 qpair failed and we were unable to recover it. 00:30:11.050 [2024-04-15 02:04:56.522189] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.522376] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.522401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.050 qpair failed and we were unable to recover it. 00:30:11.050 [2024-04-15 02:04:56.522626] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.522852] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.522877] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.050 qpair failed and we were unable to recover it. 00:30:11.050 [2024-04-15 02:04:56.523101] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.523293] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.523319] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.050 qpair failed and we were unable to recover it. 00:30:11.050 [2024-04-15 02:04:56.523516] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.523709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.523736] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.050 qpair failed and we were unable to recover it. 00:30:11.050 [2024-04-15 02:04:56.523929] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.524171] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.524198] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.050 qpair failed and we were unable to recover it. 00:30:11.050 [2024-04-15 02:04:56.524401] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.524600] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.524625] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.050 qpair failed and we were unable to recover it. 
00:30:11.050 [2024-04-15 02:04:56.524819] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.525017] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.525043] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.050 qpair failed and we were unable to recover it. 00:30:11.050 [2024-04-15 02:04:56.525272] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.525485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.525510] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.050 qpair failed and we were unable to recover it. 00:30:11.050 [2024-04-15 02:04:56.525715] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.525900] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.525925] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.050 qpair failed and we were unable to recover it. 00:30:11.050 [2024-04-15 02:04:56.526116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.526309] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.526335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.050 qpair failed and we were unable to recover it. 00:30:11.050 [2024-04-15 02:04:56.526537] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.526731] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.526758] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.050 qpair failed and we were unable to recover it. 00:30:11.050 [2024-04-15 02:04:56.526984] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.527427] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.527452] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.050 qpair failed and we were unable to recover it. 00:30:11.050 [2024-04-15 02:04:56.527643] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.527868] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.527893] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.050 qpair failed and we were unable to recover it. 
00:30:11.050 [2024-04-15 02:04:56.528116] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.528310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.528335] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.050 qpair failed and we were unable to recover it. 00:30:11.050 [2024-04-15 02:04:56.528544] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.050 [2024-04-15 02:04:56.528739] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.051 [2024-04-15 02:04:56.528764] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.051 qpair failed and we were unable to recover it. 00:30:11.051 [2024-04-15 02:04:56.528967] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.051 [2024-04-15 02:04:56.529185] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.051 [2024-04-15 02:04:56.529211] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.051 qpair failed and we were unable to recover it. 00:30:11.051 [2024-04-15 02:04:56.529411] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.051 [2024-04-15 02:04:56.529605] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.051 [2024-04-15 02:04:56.529630] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.051 qpair failed and we were unable to recover it. 00:30:11.051 [2024-04-15 02:04:56.529859] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.051 [2024-04-15 02:04:56.530054] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.051 [2024-04-15 02:04:56.530080] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.051 qpair failed and we were unable to recover it. 00:30:11.051 [2024-04-15 02:04:56.530271] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.051 [2024-04-15 02:04:56.530458] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.051 [2024-04-15 02:04:56.530483] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.051 qpair failed and we were unable to recover it. 00:30:11.051 [2024-04-15 02:04:56.530678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.051 [2024-04-15 02:04:56.530867] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:11.051 [2024-04-15 02:04:56.530892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420 00:30:11.051 qpair failed and we were unable to recover it. 
00:30:11.051 [2024-04-15 02:04:56.531075] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.051 [2024-04-15 02:04:56.531270] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.051 [2024-04-15 02:04:56.531296] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420
00:30:11.051 qpair failed and we were unable to recover it.
[the same four-line retry pattern (two posix_sock_create connect() failures with errno = 111, the nvme_tcp_qpair_connect_sock error on tqpair=0x7f5104000b90 addr=10.0.0.2 port=4420, then "qpair failed and we were unable to recover it.") repeats with timestamps advancing from 02:04:56.531486 through 02:04:56.544156]
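For context: errno = 111 is ECONNREFUSED. Throughout this stretch the host side is retrying connect() against 10.0.0.2:4420 before any NVMe/TCP listener exists; the target is only assembled by the rpc_cmd calls that follow below. A minimal host-side probe showing the same condition (hypothetical, not from the log; address and port taken from the messages above):

    # probe the NVMe/TCP port the host is retrying
    nc -z -w1 10.0.0.2 4420 || echo 'refused: nothing listening on 4420 yet'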
00:30:11.052 [2024-04-15 02:04:56.544374] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.052 [2024-04-15 02:04:56.544597] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.052 [2024-04-15 02:04:56.544622] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420
00:30:11.052 qpair failed and we were unable to recover it.
[retry pattern repeats: 02:04:56.544848 through 02:04:56.545111]
00:30:11.052 Malloc0
[retry pattern repeats: 02:04:56.545341 through 02:04:56.545588]
00:30:11.052 02:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:11.052 [2024-04-15 02:04:56.545813] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.052 02:04:56 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:11.052 02:04:56 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:11.052 02:04:56 -- common/autotest_common.sh@10 -- # set +x
[retry pattern repeats: 02:04:56.546074 through 02:04:56.547518]
[retry pattern repeats: 02:04:56.547733 through 02:04:56.548863]
00:30:11.052 [2024-04-15 02:04:56.549015] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[retry pattern repeats: 02:04:56.549106 through 02:04:56.550607]
[retry pattern repeats: 02:04:56.550824 through 02:04:56.556746]
[retry pattern repeats: 02:04:56.556959 through 02:04:56.557213]
00:30:11.053 02:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:11.053 02:04:56 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:11.053 [2024-04-15 02:04:56.557432] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.053 02:04:56 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:11.053 02:04:56 -- common/autotest_common.sh@10 -- # set +x
[retry pattern repeats: 02:04:56.557643 through 02:04:56.559337]
[retry pattern repeats: 02:04:56.559540 through 02:04:56.562469]
[retry pattern repeats: 02:04:56.562714 through 02:04:56.565258]
00:30:11.054 02:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:11.054 02:04:56 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:11.054 [2024-04-15 02:04:56.565475] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.054 02:04:56 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:11.054 02:04:56 -- common/autotest_common.sh@10 -- # set +x
00:30:11.054 [2024-04-15 02:04:56.565678] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.054 [2024-04-15 02:04:56.565703] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420
00:30:11.054 qpair failed and we were unable to recover it.
[retry pattern repeats: 02:04:56.565900 through 02:04:56.572090]
[retry pattern repeats: 02:04:56.572283 through 02:04:56.572963]
00:30:11.054 [2024-04-15 02:04:56.573191] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.054 02:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:11.054 [2024-04-15 02:04:56.573409] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.054 [2024-04-15 02:04:56.573434] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420
00:30:11.054 qpair failed and we were unable to recover it.
00:30:11.054 02:04:56 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:11.054 02:04:56 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:11.054 02:04:56 -- common/autotest_common.sh@10 -- # set +x
[retry pattern repeats: 02:04:56.573679 through 02:04:56.574758]
[retry pattern repeats: 02:04:56.574973 through 02:04:56.576937]
00:30:11.055 [2024-04-15 02:04:56.577152] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.055 [2024-04-15 02:04:56.577351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:11.055 [2024-04-15 02:04:56.577366] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:11.055 [2024-04-15 02:04:56.577377] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5104000b90 with addr=10.0.0.2, port=4420
00:30:11.055 qpair failed and we were unable to recover it.
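The xtrace lines above show the target being assembled over JSON-RPC while the host keeps retrying; the refused-connection loop only ends once the "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notice fires. Outside the autotest wrappers, roughly the same bring-up with SPDK's scripts/rpc.py would look like the sketch below (the Malloc0 creation step and its size arguments are assumptions, since that step happens before this excerpt):

    ./scripts/rpc.py nvmf_create_transport -t tcp
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # assumed earlier step
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420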
00:30:11.055 [2024-04-15 02:04:56.579800] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.055 [2024-04-15 02:04:56.580022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.055 [2024-04-15 02:04:56.580060] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.055 [2024-04-15 02:04:56.580077] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.055 [2024-04-15 02:04:56.580089] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5104000b90
00:30:11.055 [2024-04-15 02:04:56.580123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:11.055 qpair failed and we were unable to recover it.
00:30:11.055 02:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:11.055 02:04:56 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:11.055 02:04:56 -- common/autotest_common.sh@551 -- # xtrace_disable
00:30:11.055 02:04:56 -- common/autotest_common.sh@10 -- # set +x
00:30:11.055 02:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:30:11.055 02:04:56 -- host/target_disconnect.sh@58 -- # wait 2288164
[the same CONNECT failure block, ending "qpair failed and we were unable to recover it.", repeats at 02:04:56.589699 and 02:04:56.599722]
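From here the failure mode changes: the TCP connect now succeeds, but the Fabrics CONNECT for an I/O queue pair is rejected because the target does not recognize controller ID 0x1, so the host reports sct 1, sc 130 and spdk_nvme_qpair_process_completions surfaces it as CQ transport error -6 (No such device or address). Decoding that status (sct 1 is the command-specific status type; 130 decimal is 0x82, which to the best of my reading of the NVMe-oF spec is the Fabrics Connect "Invalid Parameters" code):

    printf 'sc 130 = 0x%x\n' 130    # prints: sc 130 = 0x82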
[the same six-line CONNECT failure block repeats at roughly 10 ms intervals from 02:04:56.609698 through 02:04:56.780112 (ctrlr.c "Unknown controller ID 0x1", nvme_fabric.c "Connect command failed, rc -5" and "sct 1, sc 130", nvme_tcp.c poll and connect failures on tqpair=0x7f5104000b90, nvme_qpair.c "CQ transport error -6 (No such device or address) on qpair id 1"), each ending "qpair failed and we were unable to recover it."]
00:30:11.316 [2024-04-15 02:04:56.790155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.316 [2024-04-15 02:04:56.790347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.316 [2024-04-15 02:04:56.790373] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.316 [2024-04-15 02:04:56.790387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.316 [2024-04-15 02:04:56.790399] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5104000b90 00:30:11.316 [2024-04-15 02:04:56.790427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.316 qpair failed and we were unable to recover it. 00:30:11.316 [2024-04-15 02:04:56.800175] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.316 [2024-04-15 02:04:56.800366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.316 [2024-04-15 02:04:56.800392] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.316 [2024-04-15 02:04:56.800405] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.316 [2024-04-15 02:04:56.800417] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5104000b90 00:30:11.316 [2024-04-15 02:04:56.800446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.316 qpair failed and we were unable to recover it. 00:30:11.316 [2024-04-15 02:04:56.810314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.316 [2024-04-15 02:04:56.810520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.316 [2024-04-15 02:04:56.810545] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.316 [2024-04-15 02:04:56.810559] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.316 [2024-04-15 02:04:56.810571] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5104000b90 00:30:11.316 [2024-04-15 02:04:56.810599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.316 qpair failed and we were unable to recover it. 
00:30:11.316 [2024-04-15 02:04:56.820292] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.316 [2024-04-15 02:04:56.820527] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.316 [2024-04-15 02:04:56.820554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.316 [2024-04-15 02:04:56.820569] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.316 [2024-04-15 02:04:56.820581] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5104000b90 00:30:11.316 [2024-04-15 02:04:56.820623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.316 qpair failed and we were unable to recover it. 00:30:11.316 [2024-04-15 02:04:56.830291] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.316 [2024-04-15 02:04:56.830488] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.316 [2024-04-15 02:04:56.830515] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.316 [2024-04-15 02:04:56.830529] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.316 [2024-04-15 02:04:56.830541] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5104000b90 00:30:11.316 [2024-04-15 02:04:56.830570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.316 qpair failed and we were unable to recover it. 00:30:11.316 [2024-04-15 02:04:56.840303] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.316 [2024-04-15 02:04:56.840497] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.316 [2024-04-15 02:04:56.840524] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.316 [2024-04-15 02:04:56.840544] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.316 [2024-04-15 02:04:56.840558] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5104000b90 00:30:11.316 [2024-04-15 02:04:56.840587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.316 qpair failed and we were unable to recover it. 
00:30:11.316 [2024-04-15 02:04:56.850316] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.316 [2024-04-15 02:04:56.850526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.316 [2024-04-15 02:04:56.850552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.316 [2024-04-15 02:04:56.850566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.316 [2024-04-15 02:04:56.850578] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5104000b90 00:30:11.316 [2024-04-15 02:04:56.850606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.316 qpair failed and we were unable to recover it. 00:30:11.316 [2024-04-15 02:04:56.860413] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.316 [2024-04-15 02:04:56.860609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.316 [2024-04-15 02:04:56.860635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.316 [2024-04-15 02:04:56.860648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.316 [2024-04-15 02:04:56.860660] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5104000b90 00:30:11.316 [2024-04-15 02:04:56.860701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.316 qpair failed and we were unable to recover it. 00:30:11.316 [2024-04-15 02:04:56.870402] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.316 [2024-04-15 02:04:56.870606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.316 [2024-04-15 02:04:56.870633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.316 [2024-04-15 02:04:56.870651] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.316 [2024-04-15 02:04:56.870663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5104000b90 00:30:11.316 [2024-04-15 02:04:56.870693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.316 qpair failed and we were unable to recover it. 
00:30:11.316 [2024-04-15 02:04:56.880447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.316 [2024-04-15 02:04:56.880637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.316 [2024-04-15 02:04:56.880663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.316 [2024-04-15 02:04:56.880677] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.316 [2024-04-15 02:04:56.880689] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5104000b90 00:30:11.316 [2024-04-15 02:04:56.880719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.316 qpair failed and we were unable to recover it. 00:30:11.316 [2024-04-15 02:04:56.890428] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.316 [2024-04-15 02:04:56.890624] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.316 [2024-04-15 02:04:56.890650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.316 [2024-04-15 02:04:56.890664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.316 [2024-04-15 02:04:56.890676] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5104000b90 00:30:11.316 [2024-04-15 02:04:56.890704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.316 qpair failed and we were unable to recover it. 00:30:11.316 [2024-04-15 02:04:56.900463] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.316 [2024-04-15 02:04:56.900667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.316 [2024-04-15 02:04:56.900693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.316 [2024-04-15 02:04:56.900707] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.316 [2024-04-15 02:04:56.900718] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5104000b90 00:30:11.316 [2024-04-15 02:04:56.900747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.316 qpair failed and we were unable to recover it. 
00:30:11.316 [2024-04-15 02:04:56.910515] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.316 [2024-04-15 02:04:56.910750] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.316 [2024-04-15 02:04:56.910776] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.316 [2024-04-15 02:04:56.910790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.316 [2024-04-15 02:04:56.910802] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5104000b90 00:30:11.316 [2024-04-15 02:04:56.910831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:30:11.316 qpair failed and we were unable to recover it. 00:30:11.316 [2024-04-15 02:04:56.920630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.317 [2024-04-15 02:04:56.920837] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.317 [2024-04-15 02:04:56.920871] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.317 [2024-04-15 02:04:56.920887] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.317 [2024-04-15 02:04:56.920900] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.317 [2024-04-15 02:04:56.920931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.317 qpair failed and we were unable to recover it. 00:30:11.317 [2024-04-15 02:04:56.930577] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.317 [2024-04-15 02:04:56.930776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.317 [2024-04-15 02:04:56.930809] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.317 [2024-04-15 02:04:56.930824] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.317 [2024-04-15 02:04:56.930836] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.317 [2024-04-15 02:04:56.930866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.317 qpair failed and we were unable to recover it. 
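Note that the failing host transport qpair changes here from tqpair=0x7f5104000b90 to tqpair=0x7f50fc000b90, and the CQ transport errors move from qpair id 1 to qpair id 2: a second I/O queue is now walking the same rejected-CONNECT path, consistent with the host retrying additional I/O queues against a controller ID the target no longer tracks. For reference, a TCP listener of the kind being hit is conventionally provisioned through SPDK's scripts/rpc.py roughly as follows (a sketch of the usual bring-up with the NQN, address and port taken from the log; not the harness's exact commands, and namespace setup omitted):

    # Sketch: create the TCP transport, the subsystem, and its listener.
    scripts/rpc.py nvmf_create_transport -t TCP
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420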
00:30:11.317 [2024-04-15 02:04:56.940609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.317 [2024-04-15 02:04:56.940802] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.317 [2024-04-15 02:04:56.940829] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.317 [2024-04-15 02:04:56.940844] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.317 [2024-04-15 02:04:56.940856] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.317 [2024-04-15 02:04:56.940885] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.317 qpair failed and we were unable to recover it. 00:30:11.317 [2024-04-15 02:04:56.950610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.317 [2024-04-15 02:04:56.950806] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.317 [2024-04-15 02:04:56.950833] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.317 [2024-04-15 02:04:56.950847] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.317 [2024-04-15 02:04:56.950859] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.317 [2024-04-15 02:04:56.950898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.317 qpair failed and we were unable to recover it. 00:30:11.317 [2024-04-15 02:04:56.960679] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.317 [2024-04-15 02:04:56.960926] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.317 [2024-04-15 02:04:56.960954] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.317 [2024-04-15 02:04:56.960969] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.317 [2024-04-15 02:04:56.960981] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.317 [2024-04-15 02:04:56.961023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.576 qpair failed and we were unable to recover it. 
00:30:11.576 [2024-04-15 02:04:56.970771] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.576 [2024-04-15 02:04:56.971008] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.576 [2024-04-15 02:04:56.971043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.576 [2024-04-15 02:04:56.971081] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.576 [2024-04-15 02:04:56.971103] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.576 [2024-04-15 02:04:56.971156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.576 qpair failed and we were unable to recover it. 00:30:11.576 [2024-04-15 02:04:56.980689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.576 [2024-04-15 02:04:56.980913] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.576 [2024-04-15 02:04:56.980942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.576 [2024-04-15 02:04:56.980957] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.576 [2024-04-15 02:04:56.980969] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.576 [2024-04-15 02:04:56.980999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.576 qpair failed and we were unable to recover it. 00:30:11.576 [2024-04-15 02:04:56.990706] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.576 [2024-04-15 02:04:56.990893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.576 [2024-04-15 02:04:56.990920] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.576 [2024-04-15 02:04:56.990935] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.576 [2024-04-15 02:04:56.990947] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.576 [2024-04-15 02:04:56.990976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.576 qpair failed and we were unable to recover it. 
00:30:11.576 [2024-04-15 02:04:57.000734] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.576 [2024-04-15 02:04:57.000929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.576 [2024-04-15 02:04:57.000956] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.576 [2024-04-15 02:04:57.000977] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.576 [2024-04-15 02:04:57.000989] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.576 [2024-04-15 02:04:57.001019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.576 qpair failed and we were unable to recover it. 00:30:11.576 [2024-04-15 02:04:57.010805] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.576 [2024-04-15 02:04:57.011033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.576 [2024-04-15 02:04:57.011068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.576 [2024-04-15 02:04:57.011084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.576 [2024-04-15 02:04:57.011096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.576 [2024-04-15 02:04:57.011126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.576 qpair failed and we were unable to recover it. 00:30:11.576 [2024-04-15 02:04:57.020913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.576 [2024-04-15 02:04:57.021128] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.576 [2024-04-15 02:04:57.021159] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.576 [2024-04-15 02:04:57.021175] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.576 [2024-04-15 02:04:57.021187] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.576 [2024-04-15 02:04:57.021216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.576 qpair failed and we were unable to recover it. 
00:30:11.576 [2024-04-15 02:04:57.030818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.576 [2024-04-15 02:04:57.031013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.576 [2024-04-15 02:04:57.031041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.576 [2024-04-15 02:04:57.031065] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.576 [2024-04-15 02:04:57.031078] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.576 [2024-04-15 02:04:57.031108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.576 qpair failed and we were unable to recover it. 00:30:11.576 [2024-04-15 02:04:57.040845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.576 [2024-04-15 02:04:57.041036] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.576 [2024-04-15 02:04:57.041071] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.577 [2024-04-15 02:04:57.041087] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.577 [2024-04-15 02:04:57.041099] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.577 [2024-04-15 02:04:57.041128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.577 qpair failed and we were unable to recover it. 00:30:11.577 [2024-04-15 02:04:57.050969] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.577 [2024-04-15 02:04:57.051176] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.577 [2024-04-15 02:04:57.051202] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.577 [2024-04-15 02:04:57.051216] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.577 [2024-04-15 02:04:57.051228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.577 [2024-04-15 02:04:57.051257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.577 qpair failed and we were unable to recover it. 
00:30:11.577 [2024-04-15 02:04:57.060901] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.577 [2024-04-15 02:04:57.061101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.577 [2024-04-15 02:04:57.061128] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.577 [2024-04-15 02:04:57.061142] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.577 [2024-04-15 02:04:57.061155] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.577 [2024-04-15 02:04:57.061188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.577 qpair failed and we were unable to recover it. 00:30:11.577 [2024-04-15 02:04:57.071035] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.577 [2024-04-15 02:04:57.071239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.577 [2024-04-15 02:04:57.071265] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.577 [2024-04-15 02:04:57.071280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.577 [2024-04-15 02:04:57.071292] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.577 [2024-04-15 02:04:57.071321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.577 qpair failed and we were unable to recover it. 00:30:11.577 [2024-04-15 02:04:57.081010] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.577 [2024-04-15 02:04:57.081236] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.577 [2024-04-15 02:04:57.081263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.577 [2024-04-15 02:04:57.081277] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.577 [2024-04-15 02:04:57.081289] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.577 [2024-04-15 02:04:57.081318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.577 qpair failed and we were unable to recover it. 
00:30:11.577 [2024-04-15 02:04:57.091088] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.577 [2024-04-15 02:04:57.091277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.577 [2024-04-15 02:04:57.091302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.577 [2024-04-15 02:04:57.091316] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.577 [2024-04-15 02:04:57.091328] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.577 [2024-04-15 02:04:57.091357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.577 qpair failed and we were unable to recover it. 00:30:11.577 [2024-04-15 02:04:57.101015] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.577 [2024-04-15 02:04:57.101231] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.577 [2024-04-15 02:04:57.101257] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.577 [2024-04-15 02:04:57.101271] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.577 [2024-04-15 02:04:57.101282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.577 [2024-04-15 02:04:57.101311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.577 qpair failed and we were unable to recover it. 00:30:11.577 [2024-04-15 02:04:57.111033] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.577 [2024-04-15 02:04:57.111244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.577 [2024-04-15 02:04:57.111270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.577 [2024-04-15 02:04:57.111284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.577 [2024-04-15 02:04:57.111296] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.577 [2024-04-15 02:04:57.111324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.577 qpair failed and we were unable to recover it. 
00:30:11.577 [2024-04-15 02:04:57.121063] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.577 [2024-04-15 02:04:57.121260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.577 [2024-04-15 02:04:57.121286] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.577 [2024-04-15 02:04:57.121301] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.577 [2024-04-15 02:04:57.121312] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.577 [2024-04-15 02:04:57.121342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.577 qpair failed and we were unable to recover it. 00:30:11.577 [2024-04-15 02:04:57.131109] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.577 [2024-04-15 02:04:57.131299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.577 [2024-04-15 02:04:57.131336] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.577 [2024-04-15 02:04:57.131350] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.577 [2024-04-15 02:04:57.131361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.577 [2024-04-15 02:04:57.131390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.577 qpair failed and we were unable to recover it. 00:30:11.577 [2024-04-15 02:04:57.141122] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.577 [2024-04-15 02:04:57.141321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.577 [2024-04-15 02:04:57.141347] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.577 [2024-04-15 02:04:57.141361] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.577 [2024-04-15 02:04:57.141373] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.577 [2024-04-15 02:04:57.141401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.577 qpair failed and we were unable to recover it. 
00:30:11.577 [2024-04-15 02:04:57.151194] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.577 [2024-04-15 02:04:57.151396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.577 [2024-04-15 02:04:57.151423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.577 [2024-04-15 02:04:57.151437] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.577 [2024-04-15 02:04:57.151456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.577 [2024-04-15 02:04:57.151487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.577 qpair failed and we were unable to recover it. 00:30:11.577 [2024-04-15 02:04:57.161176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.577 [2024-04-15 02:04:57.161367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.577 [2024-04-15 02:04:57.161393] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.577 [2024-04-15 02:04:57.161407] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.577 [2024-04-15 02:04:57.161419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.577 [2024-04-15 02:04:57.161448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.577 qpair failed and we were unable to recover it. 00:30:11.577 [2024-04-15 02:04:57.171211] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.577 [2024-04-15 02:04:57.171413] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.577 [2024-04-15 02:04:57.171439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.577 [2024-04-15 02:04:57.171454] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.577 [2024-04-15 02:04:57.171466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.578 [2024-04-15 02:04:57.171494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.578 qpair failed and we were unable to recover it. 
00:30:11.578 [2024-04-15 02:04:57.181301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.578 [2024-04-15 02:04:57.181495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.578 [2024-04-15 02:04:57.181521] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.578 [2024-04-15 02:04:57.181535] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.578 [2024-04-15 02:04:57.181547] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.578 [2024-04-15 02:04:57.181576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.578 qpair failed and we were unable to recover it. 00:30:11.578 [2024-04-15 02:04:57.191281] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.578 [2024-04-15 02:04:57.191476] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.578 [2024-04-15 02:04:57.191503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.578 [2024-04-15 02:04:57.191520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.578 [2024-04-15 02:04:57.191533] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.578 [2024-04-15 02:04:57.191563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.578 qpair failed and we were unable to recover it. 00:30:11.578 [2024-04-15 02:04:57.201385] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.578 [2024-04-15 02:04:57.201602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.578 [2024-04-15 02:04:57.201629] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.578 [2024-04-15 02:04:57.201643] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.578 [2024-04-15 02:04:57.201655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.578 [2024-04-15 02:04:57.201684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.578 qpair failed and we were unable to recover it. 
00:30:11.578 [2024-04-15 02:04:57.211366] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.578 [2024-04-15 02:04:57.211566] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.578 [2024-04-15 02:04:57.211593] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.578 [2024-04-15 02:04:57.211607] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.578 [2024-04-15 02:04:57.211618] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.578 [2024-04-15 02:04:57.211648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.578 qpair failed and we were unable to recover it. 00:30:11.578 [2024-04-15 02:04:57.221390] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.578 [2024-04-15 02:04:57.221644] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.578 [2024-04-15 02:04:57.221670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.578 [2024-04-15 02:04:57.221684] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.578 [2024-04-15 02:04:57.221696] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.578 [2024-04-15 02:04:57.221724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.578 qpair failed and we were unable to recover it. 00:30:11.837 [2024-04-15 02:04:57.231414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.837 [2024-04-15 02:04:57.231611] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.837 [2024-04-15 02:04:57.231640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.837 [2024-04-15 02:04:57.231655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.837 [2024-04-15 02:04:57.231666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.837 [2024-04-15 02:04:57.231696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.837 qpair failed and we were unable to recover it. 
00:30:11.837 [2024-04-15 02:04:57.241501] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.837 [2024-04-15 02:04:57.241716] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.837 [2024-04-15 02:04:57.241743] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.837 [2024-04-15 02:04:57.241763] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.837 [2024-04-15 02:04:57.241776] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.837 [2024-04-15 02:04:57.241806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.837 qpair failed and we were unable to recover it. 00:30:11.837 [2024-04-15 02:04:57.251464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.837 [2024-04-15 02:04:57.251704] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.837 [2024-04-15 02:04:57.251731] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.837 [2024-04-15 02:04:57.251745] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.837 [2024-04-15 02:04:57.251757] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.837 [2024-04-15 02:04:57.251788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.837 qpair failed and we were unable to recover it. 00:30:11.837 [2024-04-15 02:04:57.261474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.837 [2024-04-15 02:04:57.261666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.837 [2024-04-15 02:04:57.261693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.837 [2024-04-15 02:04:57.261708] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.837 [2024-04-15 02:04:57.261719] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.837 [2024-04-15 02:04:57.261748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.837 qpair failed and we were unable to recover it. 
00:30:11.837 [2024-04-15 02:04:57.271529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.837 [2024-04-15 02:04:57.271722] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.837 [2024-04-15 02:04:57.271748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.837 [2024-04-15 02:04:57.271762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.837 [2024-04-15 02:04:57.271774] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.837 [2024-04-15 02:04:57.271802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.837 qpair failed and we were unable to recover it. 00:30:11.837 [2024-04-15 02:04:57.281535] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.837 [2024-04-15 02:04:57.281728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.837 [2024-04-15 02:04:57.281754] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.837 [2024-04-15 02:04:57.281768] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.837 [2024-04-15 02:04:57.281780] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.837 [2024-04-15 02:04:57.281809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.837 qpair failed and we were unable to recover it. 00:30:11.837 [2024-04-15 02:04:57.291598] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:11.837 [2024-04-15 02:04:57.291813] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:11.837 [2024-04-15 02:04:57.291841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:11.837 [2024-04-15 02:04:57.291859] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:11.837 [2024-04-15 02:04:57.291871] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:11.837 [2024-04-15 02:04:57.291900] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:11.837 qpair failed and we were unable to recover it. 
00:30:11.837 [2024-04-15 02:04:57.301684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.837 [2024-04-15 02:04:57.301879] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.837 [2024-04-15 02:04:57.301905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.837 [2024-04-15 02:04:57.301920] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.837 [2024-04-15 02:04:57.301931] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.837 [2024-04-15 02:04:57.301960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.837 qpair failed and we were unable to recover it.
00:30:11.837 [2024-04-15 02:04:57.311652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.837 [2024-04-15 02:04:57.311856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.837 [2024-04-15 02:04:57.311884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.837 [2024-04-15 02:04:57.311899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.837 [2024-04-15 02:04:57.311914] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.837 [2024-04-15 02:04:57.311945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.837 qpair failed and we were unable to recover it.
00:30:11.837 [2024-04-15 02:04:57.321640] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.837 [2024-04-15 02:04:57.321835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.837 [2024-04-15 02:04:57.321862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.837 [2024-04-15 02:04:57.321876] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.837 [2024-04-15 02:04:57.321888] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.837 [2024-04-15 02:04:57.321917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.837 qpair failed and we were unable to recover it.
00:30:11.837 [2024-04-15 02:04:57.331678] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.837 [2024-04-15 02:04:57.331878] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.837 [2024-04-15 02:04:57.331905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.837 [2024-04-15 02:04:57.331924] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.837 [2024-04-15 02:04:57.331937] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.837 [2024-04-15 02:04:57.331966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.837 qpair failed and we were unable to recover it.
00:30:11.837 [2024-04-15 02:04:57.341697] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.838 [2024-04-15 02:04:57.341905] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.838 [2024-04-15 02:04:57.341931] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.838 [2024-04-15 02:04:57.341945] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.838 [2024-04-15 02:04:57.341957] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.838 [2024-04-15 02:04:57.341985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.838 qpair failed and we were unable to recover it.
00:30:11.838 [2024-04-15 02:04:57.351717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.838 [2024-04-15 02:04:57.351906] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.838 [2024-04-15 02:04:57.351932] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.838 [2024-04-15 02:04:57.351945] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.838 [2024-04-15 02:04:57.351957] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.838 [2024-04-15 02:04:57.351985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.838 qpair failed and we were unable to recover it.
00:30:11.838 [2024-04-15 02:04:57.361777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.838 [2024-04-15 02:04:57.362008] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.838 [2024-04-15 02:04:57.362035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.838 [2024-04-15 02:04:57.362057] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.838 [2024-04-15 02:04:57.362070] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.838 [2024-04-15 02:04:57.362100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.838 qpair failed and we were unable to recover it.
00:30:11.838 [2024-04-15 02:04:57.371811] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.838 [2024-04-15 02:04:57.372006] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.838 [2024-04-15 02:04:57.372032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.838 [2024-04-15 02:04:57.372052] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.838 [2024-04-15 02:04:57.372066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.838 [2024-04-15 02:04:57.372095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.838 qpair failed and we were unable to recover it.
00:30:11.838 [2024-04-15 02:04:57.381830] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.838 [2024-04-15 02:04:57.382027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.838 [2024-04-15 02:04:57.382059] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.838 [2024-04-15 02:04:57.382075] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.838 [2024-04-15 02:04:57.382087] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.838 [2024-04-15 02:04:57.382127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.838 qpair failed and we were unable to recover it.
00:30:11.838 [2024-04-15 02:04:57.391871] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.838 [2024-04-15 02:04:57.392065] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.838 [2024-04-15 02:04:57.392091] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.838 [2024-04-15 02:04:57.392105] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.838 [2024-04-15 02:04:57.392117] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.838 [2024-04-15 02:04:57.392146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.838 qpair failed and we were unable to recover it.
00:30:11.838 [2024-04-15 02:04:57.401886] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.838 [2024-04-15 02:04:57.402085] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.838 [2024-04-15 02:04:57.402111] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.838 [2024-04-15 02:04:57.402125] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.838 [2024-04-15 02:04:57.402137] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.838 [2024-04-15 02:04:57.402166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.838 qpair failed and we were unable to recover it.
00:30:11.838 [2024-04-15 02:04:57.411952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.838 [2024-04-15 02:04:57.412154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.838 [2024-04-15 02:04:57.412180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.838 [2024-04-15 02:04:57.412195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.838 [2024-04-15 02:04:57.412206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.838 [2024-04-15 02:04:57.412236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.838 qpair failed and we were unable to recover it.
00:30:11.838 [2024-04-15 02:04:57.421960] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.838 [2024-04-15 02:04:57.422169] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.838 [2024-04-15 02:04:57.422201] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.838 [2024-04-15 02:04:57.422216] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.838 [2024-04-15 02:04:57.422228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.838 [2024-04-15 02:04:57.422257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.838 qpair failed and we were unable to recover it.
00:30:11.838 [2024-04-15 02:04:57.432003] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.838 [2024-04-15 02:04:57.432203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.838 [2024-04-15 02:04:57.432229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.838 [2024-04-15 02:04:57.432243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.838 [2024-04-15 02:04:57.432255] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.838 [2024-04-15 02:04:57.432284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.838 qpair failed and we were unable to recover it.
00:30:11.838 [2024-04-15 02:04:57.441998] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.838 [2024-04-15 02:04:57.442189] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.838 [2024-04-15 02:04:57.442215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.838 [2024-04-15 02:04:57.442228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.838 [2024-04-15 02:04:57.442240] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.838 [2024-04-15 02:04:57.442269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.838 qpair failed and we were unable to recover it.
00:30:11.838 [2024-04-15 02:04:57.452071] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.838 [2024-04-15 02:04:57.452310] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.838 [2024-04-15 02:04:57.452335] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.838 [2024-04-15 02:04:57.452349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.838 [2024-04-15 02:04:57.452361] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.838 [2024-04-15 02:04:57.452390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.838 qpair failed and we were unable to recover it.
00:30:11.838 [2024-04-15 02:04:57.462060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.838 [2024-04-15 02:04:57.462298] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.838 [2024-04-15 02:04:57.462324] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.838 [2024-04-15 02:04:57.462337] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.838 [2024-04-15 02:04:57.462349] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.838 [2024-04-15 02:04:57.462384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.838 qpair failed and we were unable to recover it.
00:30:11.838 [2024-04-15 02:04:57.472116] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.838 [2024-04-15 02:04:57.472324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.838 [2024-04-15 02:04:57.472350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.839 [2024-04-15 02:04:57.472364] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.839 [2024-04-15 02:04:57.472375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.839 [2024-04-15 02:04:57.472404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.839 qpair failed and we were unable to recover it.
00:30:11.839 [2024-04-15 02:04:57.482128] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:11.839 [2024-04-15 02:04:57.482326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:11.839 [2024-04-15 02:04:57.482353] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:11.839 [2024-04-15 02:04:57.482368] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:11.839 [2024-04-15 02:04:57.482380] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:11.839 [2024-04-15 02:04:57.482411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:11.839 qpair failed and we were unable to recover it.
00:30:12.097 [2024-04-15 02:04:57.492168] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.097 [2024-04-15 02:04:57.492412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.097 [2024-04-15 02:04:57.492439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.097 [2024-04-15 02:04:57.492456] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.097 [2024-04-15 02:04:57.492469] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.097 [2024-04-15 02:04:57.492498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.097 qpair failed and we were unable to recover it.
00:30:12.097 [2024-04-15 02:04:57.502268] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.097 [2024-04-15 02:04:57.502470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.097 [2024-04-15 02:04:57.502495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.097 [2024-04-15 02:04:57.502510] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.097 [2024-04-15 02:04:57.502522] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.097 [2024-04-15 02:04:57.502551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.097 qpair failed and we were unable to recover it.
00:30:12.097 [2024-04-15 02:04:57.512195] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.098 [2024-04-15 02:04:57.512400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.098 [2024-04-15 02:04:57.512430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.098 [2024-04-15 02:04:57.512445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.098 [2024-04-15 02:04:57.512457] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.098 [2024-04-15 02:04:57.512486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.098 qpair failed and we were unable to recover it.
00:30:12.098 [2024-04-15 02:04:57.522253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.098 [2024-04-15 02:04:57.522510] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.098 [2024-04-15 02:04:57.522539] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.098 [2024-04-15 02:04:57.522555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.098 [2024-04-15 02:04:57.522567] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.098 [2024-04-15 02:04:57.522612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.098 qpair failed and we were unable to recover it.
00:30:12.098 [2024-04-15 02:04:57.532314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.098 [2024-04-15 02:04:57.532514] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.098 [2024-04-15 02:04:57.532540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.098 [2024-04-15 02:04:57.532553] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.098 [2024-04-15 02:04:57.532566] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.098 [2024-04-15 02:04:57.532595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.098 qpair failed and we were unable to recover it.
00:30:12.098 [2024-04-15 02:04:57.542347] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.098 [2024-04-15 02:04:57.542545] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.098 [2024-04-15 02:04:57.542570] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.098 [2024-04-15 02:04:57.542585] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.098 [2024-04-15 02:04:57.542598] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.098 [2024-04-15 02:04:57.542628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.098 qpair failed and we were unable to recover it.
00:30:12.098 [2024-04-15 02:04:57.552362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.098 [2024-04-15 02:04:57.552548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.098 [2024-04-15 02:04:57.552573] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.098 [2024-04-15 02:04:57.552588] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.098 [2024-04-15 02:04:57.552601] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.098 [2024-04-15 02:04:57.552636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.098 qpair failed and we were unable to recover it.
00:30:12.098 [2024-04-15 02:04:57.562365] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.098 [2024-04-15 02:04:57.562559] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.098 [2024-04-15 02:04:57.562587] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.098 [2024-04-15 02:04:57.562603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.098 [2024-04-15 02:04:57.562615] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.098 [2024-04-15 02:04:57.562646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.098 qpair failed and we were unable to recover it.
00:30:12.098 [2024-04-15 02:04:57.572427] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.098 [2024-04-15 02:04:57.572629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.098 [2024-04-15 02:04:57.572654] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.098 [2024-04-15 02:04:57.572670] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.098 [2024-04-15 02:04:57.572683] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.098 [2024-04-15 02:04:57.572712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.098 qpair failed and we were unable to recover it.
00:30:12.098 [2024-04-15 02:04:57.582453] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.098 [2024-04-15 02:04:57.582700] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.098 [2024-04-15 02:04:57.582727] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.098 [2024-04-15 02:04:57.582743] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.098 [2024-04-15 02:04:57.582757] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.098 [2024-04-15 02:04:57.582786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.098 qpair failed and we were unable to recover it.
00:30:12.098 [2024-04-15 02:04:57.592461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.098 [2024-04-15 02:04:57.592654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.098 [2024-04-15 02:04:57.592680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.098 [2024-04-15 02:04:57.592694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.098 [2024-04-15 02:04:57.592707] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.098 [2024-04-15 02:04:57.592737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.098 qpair failed and we were unable to recover it.
00:30:12.098 [2024-04-15 02:04:57.602464] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.098 [2024-04-15 02:04:57.602658] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.098 [2024-04-15 02:04:57.602688] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.098 [2024-04-15 02:04:57.602703] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.098 [2024-04-15 02:04:57.602717] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.098 [2024-04-15 02:04:57.602746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.098 qpair failed and we were unable to recover it.
00:30:12.098 [2024-04-15 02:04:57.612502] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.098 [2024-04-15 02:04:57.612698] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.098 [2024-04-15 02:04:57.612723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.098 [2024-04-15 02:04:57.612737] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.098 [2024-04-15 02:04:57.612749] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.098 [2024-04-15 02:04:57.612778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.098 qpair failed and we were unable to recover it.
00:30:12.098 [2024-04-15 02:04:57.622529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.098 [2024-04-15 02:04:57.622731] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.098 [2024-04-15 02:04:57.622756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.098 [2024-04-15 02:04:57.622770] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.098 [2024-04-15 02:04:57.622784] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.098 [2024-04-15 02:04:57.622813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.098 qpair failed and we were unable to recover it.
00:30:12.098 [2024-04-15 02:04:57.632564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.098 [2024-04-15 02:04:57.632795] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.098 [2024-04-15 02:04:57.632822] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.098 [2024-04-15 02:04:57.632838] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.098 [2024-04-15 02:04:57.632851] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.098 [2024-04-15 02:04:57.632879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.098 qpair failed and we were unable to recover it.
00:30:12.098 [2024-04-15 02:04:57.642609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.098 [2024-04-15 02:04:57.642797] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.099 [2024-04-15 02:04:57.642825] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.099 [2024-04-15 02:04:57.642856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.099 [2024-04-15 02:04:57.642874] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.099 [2024-04-15 02:04:57.642920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.099 qpair failed and we were unable to recover it.
00:30:12.099 [2024-04-15 02:04:57.652619] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.099 [2024-04-15 02:04:57.652815] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.099 [2024-04-15 02:04:57.652840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.099 [2024-04-15 02:04:57.652854] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.099 [2024-04-15 02:04:57.652867] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.099 [2024-04-15 02:04:57.652896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.099 qpair failed and we were unable to recover it.
00:30:12.099 [2024-04-15 02:04:57.662701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.099 [2024-04-15 02:04:57.662897] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.099 [2024-04-15 02:04:57.662921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.099 [2024-04-15 02:04:57.662936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.099 [2024-04-15 02:04:57.662950] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.099 [2024-04-15 02:04:57.662979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.099 qpair failed and we were unable to recover it.
00:30:12.099 [2024-04-15 02:04:57.672795] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.099 [2024-04-15 02:04:57.672993] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.099 [2024-04-15 02:04:57.673017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.099 [2024-04-15 02:04:57.673032] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.099 [2024-04-15 02:04:57.673051] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.099 [2024-04-15 02:04:57.673084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.099 qpair failed and we were unable to recover it.
00:30:12.099 [2024-04-15 02:04:57.682735] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.099 [2024-04-15 02:04:57.682940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.099 [2024-04-15 02:04:57.682984] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.099 [2024-04-15 02:04:57.682999] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.099 [2024-04-15 02:04:57.683012] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.099 [2024-04-15 02:04:57.683076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.099 qpair failed and we were unable to recover it.
00:30:12.099 [2024-04-15 02:04:57.692751] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.099 [2024-04-15 02:04:57.693004] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.099 [2024-04-15 02:04:57.693032] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.099 [2024-04-15 02:04:57.693056] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.099 [2024-04-15 02:04:57.693072] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.099 [2024-04-15 02:04:57.693103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.099 qpair failed and we were unable to recover it.
00:30:12.099 [2024-04-15 02:04:57.702777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.099 [2024-04-15 02:04:57.702966] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.099 [2024-04-15 02:04:57.702991] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.099 [2024-04-15 02:04:57.703005] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.099 [2024-04-15 02:04:57.703018] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.099 [2024-04-15 02:04:57.703054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.099 qpair failed and we were unable to recover it.
00:30:12.099 [2024-04-15 02:04:57.712785] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.099 [2024-04-15 02:04:57.712980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.099 [2024-04-15 02:04:57.713005] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.099 [2024-04-15 02:04:57.713019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.099 [2024-04-15 02:04:57.713032] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.099 [2024-04-15 02:04:57.713068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.099 qpair failed and we were unable to recover it.
00:30:12.099 [2024-04-15 02:04:57.722909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.099 [2024-04-15 02:04:57.723107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.099 [2024-04-15 02:04:57.723133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.099 [2024-04-15 02:04:57.723148] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.099 [2024-04-15 02:04:57.723171] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.099 [2024-04-15 02:04:57.723215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.099 qpair failed and we were unable to recover it.
00:30:12.099 [2024-04-15 02:04:57.732898] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.099 [2024-04-15 02:04:57.733140] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.099 [2024-04-15 02:04:57.733169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.099 [2024-04-15 02:04:57.733185] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.099 [2024-04-15 02:04:57.733204] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.099 [2024-04-15 02:04:57.733236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.099 qpair failed and we were unable to recover it.
00:30:12.099 [2024-04-15 02:04:57.742968] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.099 [2024-04-15 02:04:57.743176] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.099 [2024-04-15 02:04:57.743202] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.099 [2024-04-15 02:04:57.743217] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.099 [2024-04-15 02:04:57.743230] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.099 [2024-04-15 02:04:57.743260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.099 qpair failed and we were unable to recover it.
00:30:12.358 [2024-04-15 02:04:57.752931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.358 [2024-04-15 02:04:57.753131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.358 [2024-04-15 02:04:57.753157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.358 [2024-04-15 02:04:57.753171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.358 [2024-04-15 02:04:57.753184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.358 [2024-04-15 02:04:57.753213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.358 [2024-04-15 02:04:57.762955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.358 [2024-04-15 02:04:57.763158] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.358 [2024-04-15 02:04:57.763184] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.358 [2024-04-15 02:04:57.763198] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.358 [2024-04-15 02:04:57.763211] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.358 [2024-04-15 02:04:57.763242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.358 [2024-04-15 02:04:57.772974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.358 [2024-04-15 02:04:57.773214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.358 [2024-04-15 02:04:57.773241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.358 [2024-04-15 02:04:57.773256] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.358 [2024-04-15 02:04:57.773270] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.358 [2024-04-15 02:04:57.773299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.358 [2024-04-15 02:04:57.783032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.358 [2024-04-15 02:04:57.783266] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.358 [2024-04-15 02:04:57.783294] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.358 [2024-04-15 02:04:57.783309] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.358 [2024-04-15 02:04:57.783322] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.358 [2024-04-15 02:04:57.783354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.358 [2024-04-15 02:04:57.793056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.358 [2024-04-15 02:04:57.793286] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.358 [2024-04-15 02:04:57.793315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.358 [2024-04-15 02:04:57.793330] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.358 [2024-04-15 02:04:57.793344] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.358 [2024-04-15 02:04:57.793374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.358 [2024-04-15 02:04:57.803086] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.358 [2024-04-15 02:04:57.803277] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.358 [2024-04-15 02:04:57.803302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.358 [2024-04-15 02:04:57.803317] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.358 [2024-04-15 02:04:57.803330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.358 [2024-04-15 02:04:57.803361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.358 [2024-04-15 02:04:57.813107] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.358 [2024-04-15 02:04:57.813309] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.358 [2024-04-15 02:04:57.813335] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.358 [2024-04-15 02:04:57.813349] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.358 [2024-04-15 02:04:57.813362] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.358 [2024-04-15 02:04:57.813391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.358 [2024-04-15 02:04:57.823137] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.358 [2024-04-15 02:04:57.823347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.358 [2024-04-15 02:04:57.823375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.358 [2024-04-15 02:04:57.823396] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.358 [2024-04-15 02:04:57.823410] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.358 [2024-04-15 02:04:57.823440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.358 qpair failed and we were unable to recover it.
00:30:12.358 [2024-04-15 02:04:57.833144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.359 [2024-04-15 02:04:57.833348] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.359 [2024-04-15 02:04:57.833375] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.359 [2024-04-15 02:04:57.833390] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.359 [2024-04-15 02:04:57.833402] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.359 [2024-04-15 02:04:57.833431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-04-15 02:04:57.843215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.359 [2024-04-15 02:04:57.843416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.359 [2024-04-15 02:04:57.843441] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.359 [2024-04-15 02:04:57.843455] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.359 [2024-04-15 02:04:57.843469] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.359 [2024-04-15 02:04:57.843498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-04-15 02:04:57.853262] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.359 [2024-04-15 02:04:57.853467] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.359 [2024-04-15 02:04:57.853492] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.359 [2024-04-15 02:04:57.853507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.359 [2024-04-15 02:04:57.853520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.359 [2024-04-15 02:04:57.853549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-04-15 02:04:57.863305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.359 [2024-04-15 02:04:57.863537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.359 [2024-04-15 02:04:57.863562] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.359 [2024-04-15 02:04:57.863576] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.359 [2024-04-15 02:04:57.863589] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.359 [2024-04-15 02:04:57.863619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-04-15 02:04:57.873290] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.359 [2024-04-15 02:04:57.873532] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.359 [2024-04-15 02:04:57.873560] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.359 [2024-04-15 02:04:57.873575] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.359 [2024-04-15 02:04:57.873588] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.359 [2024-04-15 02:04:57.873632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-04-15 02:04:57.883299] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.359 [2024-04-15 02:04:57.883558] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.359 [2024-04-15 02:04:57.883586] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.359 [2024-04-15 02:04:57.883601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.359 [2024-04-15 02:04:57.883618] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.359 [2024-04-15 02:04:57.883663] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-04-15 02:04:57.893365] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.359 [2024-04-15 02:04:57.893588] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.359 [2024-04-15 02:04:57.893615] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.359 [2024-04-15 02:04:57.893631] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.359 [2024-04-15 02:04:57.893644] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.359 [2024-04-15 02:04:57.893674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-04-15 02:04:57.903344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.359 [2024-04-15 02:04:57.903539] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.359 [2024-04-15 02:04:57.903565] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.359 [2024-04-15 02:04:57.903579] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.359 [2024-04-15 02:04:57.903593] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.359 [2024-04-15 02:04:57.903622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-04-15 02:04:57.913398] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.359 [2024-04-15 02:04:57.913600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.359 [2024-04-15 02:04:57.913633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.359 [2024-04-15 02:04:57.913649] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.359 [2024-04-15 02:04:57.913662] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.359 [2024-04-15 02:04:57.913692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-04-15 02:04:57.923417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.359 [2024-04-15 02:04:57.923618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.359 [2024-04-15 02:04:57.923643] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.359 [2024-04-15 02:04:57.923658] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.359 [2024-04-15 02:04:57.923671] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.359 [2024-04-15 02:04:57.923701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-04-15 02:04:57.933489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.359 [2024-04-15 02:04:57.933731] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.359 [2024-04-15 02:04:57.933759] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.359 [2024-04-15 02:04:57.933774] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.359 [2024-04-15 02:04:57.933787] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.359 [2024-04-15 02:04:57.933816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-04-15 02:04:57.943462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.359 [2024-04-15 02:04:57.943687] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.359 [2024-04-15 02:04:57.943714] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.359 [2024-04-15 02:04:57.943729] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.359 [2024-04-15 02:04:57.943743] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.359 [2024-04-15 02:04:57.943772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-04-15 02:04:57.953557] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.359 [2024-04-15 02:04:57.953756] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.359 [2024-04-15 02:04:57.953781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.359 [2024-04-15 02:04:57.953795] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.359 [2024-04-15 02:04:57.953809] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.359 [2024-04-15 02:04:57.953839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.359 qpair failed and we were unable to recover it.
00:30:12.359 [2024-04-15 02:04:57.963529] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.359 [2024-04-15 02:04:57.963778] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.359 [2024-04-15 02:04:57.963806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.359 [2024-04-15 02:04:57.963822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.360 [2024-04-15 02:04:57.963834] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.360 [2024-04-15 02:04:57.963864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-04-15 02:04:57.973605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.360 [2024-04-15 02:04:57.973809] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.360 [2024-04-15 02:04:57.973835] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.360 [2024-04-15 02:04:57.973850] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.360 [2024-04-15 02:04:57.973862] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.360 [2024-04-15 02:04:57.973890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-04-15 02:04:57.983690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.360 [2024-04-15 02:04:57.983893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.360 [2024-04-15 02:04:57.983921] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.360 [2024-04-15 02:04:57.983936] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.360 [2024-04-15 02:04:57.983950] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.360 [2024-04-15 02:04:57.983980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-04-15 02:04:57.993658] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.360 [2024-04-15 02:04:57.993899] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.360 [2024-04-15 02:04:57.993928] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.360 [2024-04-15 02:04:57.993943] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.360 [2024-04-15 02:04:57.993956] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.360 [2024-04-15 02:04:57.993986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.360 [2024-04-15 02:04:58.003665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.360 [2024-04-15 02:04:58.003863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.360 [2024-04-15 02:04:58.003894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.360 [2024-04-15 02:04:58.003909] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.360 [2024-04-15 02:04:58.003922] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.360 [2024-04-15 02:04:58.003951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.360 qpair failed and we were unable to recover it.
00:30:12.619 [2024-04-15 02:04:58.013733] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.619 [2024-04-15 02:04:58.013975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.619 [2024-04-15 02:04:58.014003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.619 [2024-04-15 02:04:58.014018] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.619 [2024-04-15 02:04:58.014032] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.619 [2024-04-15 02:04:58.014068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.619 qpair failed and we were unable to recover it.
00:30:12.619 [2024-04-15 02:04:58.023777] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.619 [2024-04-15 02:04:58.024022] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.619 [2024-04-15 02:04:58.024055] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.619 [2024-04-15 02:04:58.024072] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.619 [2024-04-15 02:04:58.024085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.619 [2024-04-15 02:04:58.024115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.619 qpair failed and we were unable to recover it.
00:30:12.619 [2024-04-15 02:04:58.033742] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.619 [2024-04-15 02:04:58.033980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.619 [2024-04-15 02:04:58.034007] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.619 [2024-04-15 02:04:58.034022] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.619 [2024-04-15 02:04:58.034035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.619 [2024-04-15 02:04:58.034071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.619 qpair failed and we were unable to recover it.
00:30:12.619 [2024-04-15 02:04:58.043758] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.619 [2024-04-15 02:04:58.043946] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.619 [2024-04-15 02:04:58.043971] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.619 [2024-04-15 02:04:58.043985] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.619 [2024-04-15 02:04:58.043998] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.619 [2024-04-15 02:04:58.044053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.619 qpair failed and we were unable to recover it.
00:30:12.619 [2024-04-15 02:04:58.053845] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.619 [2024-04-15 02:04:58.054074] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.619 [2024-04-15 02:04:58.054100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.619 [2024-04-15 02:04:58.054114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.619 [2024-04-15 02:04:58.054127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.619 [2024-04-15 02:04:58.054156] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.619 qpair failed and we were unable to recover it.
00:30:12.619 [2024-04-15 02:04:58.063813] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.619 [2024-04-15 02:04:58.064070] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.619 [2024-04-15 02:04:58.064099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.619 [2024-04-15 02:04:58.064115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.619 [2024-04-15 02:04:58.064128] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.619 [2024-04-15 02:04:58.064160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.619 qpair failed and we were unable to recover it.
00:30:12.619 [2024-04-15 02:04:58.073910] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.619 [2024-04-15 02:04:58.074138] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.619 [2024-04-15 02:04:58.074175] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.619 [2024-04-15 02:04:58.074191] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.619 [2024-04-15 02:04:58.074205] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.620 [2024-04-15 02:04:58.074234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.620 qpair failed and we were unable to recover it.
00:30:12.620 [2024-04-15 02:04:58.083895] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.620 [2024-04-15 02:04:58.084114] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.620 [2024-04-15 02:04:58.084140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.620 [2024-04-15 02:04:58.084154] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.620 [2024-04-15 02:04:58.084168] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.620 [2024-04-15 02:04:58.084198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.620 qpair failed and we were unable to recover it.
00:30:12.620 [2024-04-15 02:04:58.093922] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.620 [2024-04-15 02:04:58.094126] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.620 [2024-04-15 02:04:58.094157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.620 [2024-04-15 02:04:58.094172] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.620 [2024-04-15 02:04:58.094184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.620 [2024-04-15 02:04:58.094216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.620 qpair failed and we were unable to recover it.
00:30:12.620 [2024-04-15 02:04:58.103942] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.620 [2024-04-15 02:04:58.104151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.620 [2024-04-15 02:04:58.104177] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.620 [2024-04-15 02:04:58.104191] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.620 [2024-04-15 02:04:58.104204] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.620 [2024-04-15 02:04:58.104234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.620 qpair failed and we were unable to recover it.
00:30:12.620 [2024-04-15 02:04:58.113986] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.620 [2024-04-15 02:04:58.114184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.620 [2024-04-15 02:04:58.114209] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.620 [2024-04-15 02:04:58.114224] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.620 [2024-04-15 02:04:58.114237] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.620 [2024-04-15 02:04:58.114267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.620 qpair failed and we were unable to recover it.
00:30:12.620 [2024-04-15 02:04:58.124000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.620 [2024-04-15 02:04:58.124200] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.620 [2024-04-15 02:04:58.124228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.620 [2024-04-15 02:04:58.124243] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.620 [2024-04-15 02:04:58.124256] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.620 [2024-04-15 02:04:58.124286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.620 qpair failed and we were unable to recover it.
00:30:12.620 [2024-04-15 02:04:58.134031] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.620 [2024-04-15 02:04:58.134239] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.620 [2024-04-15 02:04:58.134266] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.620 [2024-04-15 02:04:58.134280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.620 [2024-04-15 02:04:58.134299] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.620 [2024-04-15 02:04:58.134330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.620 qpair failed and we were unable to recover it.
00:30:12.620 [2024-04-15 02:04:58.144072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.620 [2024-04-15 02:04:58.144275] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.620 [2024-04-15 02:04:58.144302] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.620 [2024-04-15 02:04:58.144318] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.620 [2024-04-15 02:04:58.144330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.620 [2024-04-15 02:04:58.144360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.620 qpair failed and we were unable to recover it.
00:30:12.620 [2024-04-15 02:04:58.154098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.620 [2024-04-15 02:04:58.154369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.620 [2024-04-15 02:04:58.154396] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.620 [2024-04-15 02:04:58.154411] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.620 [2024-04-15 02:04:58.154423] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.620 [2024-04-15 02:04:58.154452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.620 qpair failed and we were unable to recover it.
00:30:12.620 [2024-04-15 02:04:58.164139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.620 [2024-04-15 02:04:58.164377] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.620 [2024-04-15 02:04:58.164403] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.620 [2024-04-15 02:04:58.164419] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.620 [2024-04-15 02:04:58.164431] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.620 [2024-04-15 02:04:58.164460] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.620 qpair failed and we were unable to recover it.
00:30:12.620 [2024-04-15 02:04:58.174149] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.620 [2024-04-15 02:04:58.174404] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.620 [2024-04-15 02:04:58.174430] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.620 [2024-04-15 02:04:58.174445] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.620 [2024-04-15 02:04:58.174456] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.620 [2024-04-15 02:04:58.174486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.620 qpair failed and we were unable to recover it.
00:30:12.620 [2024-04-15 02:04:58.184226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.620 [2024-04-15 02:04:58.184438] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.620 [2024-04-15 02:04:58.184465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.620 [2024-04-15 02:04:58.184480] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.620 [2024-04-15 02:04:58.184493] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.620 [2024-04-15 02:04:58.184521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.620 qpair failed and we were unable to recover it.
00:30:12.620 [2024-04-15 02:04:58.194223] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.620 [2024-04-15 02:04:58.194420] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.620 [2024-04-15 02:04:58.194446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.620 [2024-04-15 02:04:58.194461] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.620 [2024-04-15 02:04:58.194473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.620 [2024-04-15 02:04:58.194502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.620 qpair failed and we were unable to recover it.
00:30:12.620 [2024-04-15 02:04:58.204323] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.620 [2024-04-15 02:04:58.204536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.620 [2024-04-15 02:04:58.204563] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.620 [2024-04-15 02:04:58.204578] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.620 [2024-04-15 02:04:58.204590] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.620 [2024-04-15 02:04:58.204620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.620 qpair failed and we were unable to recover it.
00:30:12.620 [2024-04-15 02:04:58.214306] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.620 [2024-04-15 02:04:58.214511] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.621 [2024-04-15 02:04:58.214537] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.621 [2024-04-15 02:04:58.214552] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.621 [2024-04-15 02:04:58.214565] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.621 [2024-04-15 02:04:58.214593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.621 qpair failed and we were unable to recover it.
00:30:12.621 [2024-04-15 02:04:58.224328] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.621 [2024-04-15 02:04:58.224587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.621 [2024-04-15 02:04:58.224623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.621 [2024-04-15 02:04:58.224648] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.621 [2024-04-15 02:04:58.224680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.621 [2024-04-15 02:04:58.224734] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.621 qpair failed and we were unable to recover it.
00:30:12.621 [2024-04-15 02:04:58.234365] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.621 [2024-04-15 02:04:58.234612] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.621 [2024-04-15 02:04:58.234640] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.621 [2024-04-15 02:04:58.234655] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.621 [2024-04-15 02:04:58.234667] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.621 [2024-04-15 02:04:58.234697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.621 qpair failed and we were unable to recover it.
00:30:12.621 [2024-04-15 02:04:58.244374] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.621 [2024-04-15 02:04:58.244575] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.621 [2024-04-15 02:04:58.244602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.621 [2024-04-15 02:04:58.244616] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.621 [2024-04-15 02:04:58.244628] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.621 [2024-04-15 02:04:58.244657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.621 qpair failed and we were unable to recover it.
00:30:12.621 [2024-04-15 02:04:58.254473] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.621 [2024-04-15 02:04:58.254683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.621 [2024-04-15 02:04:58.254710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.621 [2024-04-15 02:04:58.254725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.621 [2024-04-15 02:04:58.254737] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.621 [2024-04-15 02:04:58.254766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.621 qpair failed and we were unable to recover it.
00:30:12.621 [2024-04-15 02:04:58.264415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.621 [2024-04-15 02:04:58.264619] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.621 [2024-04-15 02:04:58.264646] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.621 [2024-04-15 02:04:58.264660] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.621 [2024-04-15 02:04:58.264672] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.621 [2024-04-15 02:04:58.264702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.621 qpair failed and we were unable to recover it.
00:30:12.880 [2024-04-15 02:04:58.274489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.880 [2024-04-15 02:04:58.274744] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.880 [2024-04-15 02:04:58.274771] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.880 [2024-04-15 02:04:58.274786] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.880 [2024-04-15 02:04:58.274799] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.880 [2024-04-15 02:04:58.274829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.880 qpair failed and we were unable to recover it.
00:30:12.880 [2024-04-15 02:04:58.284601] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.880 [2024-04-15 02:04:58.284806] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.880 [2024-04-15 02:04:58.284847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.880 [2024-04-15 02:04:58.284863] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.880 [2024-04-15 02:04:58.284875] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.880 [2024-04-15 02:04:58.284904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.880 qpair failed and we were unable to recover it.
00:30:12.880 [2024-04-15 02:04:58.294560] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.880 [2024-04-15 02:04:58.294794] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.880 [2024-04-15 02:04:58.294821] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.880 [2024-04-15 02:04:58.294836] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.880 [2024-04-15 02:04:58.294848] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.880 [2024-04-15 02:04:58.294878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.880 qpair failed and we were unable to recover it.
00:30:12.880 [2024-04-15 02:04:58.304522] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.880 [2024-04-15 02:04:58.304721] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.880 [2024-04-15 02:04:58.304748] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.880 [2024-04-15 02:04:58.304762] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.880 [2024-04-15 02:04:58.304775] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.880 [2024-04-15 02:04:58.304803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.880 qpair failed and we were unable to recover it.
00:30:12.880 [2024-04-15 02:04:58.314542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.880 [2024-04-15 02:04:58.314751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.880 [2024-04-15 02:04:58.314778] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.880 [2024-04-15 02:04:58.314797] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.880 [2024-04-15 02:04:58.314811] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.880 [2024-04-15 02:04:58.314840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.880 qpair failed and we were unable to recover it.
00:30:12.880 [2024-04-15 02:04:58.324650] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.881 [2024-04-15 02:04:58.324919] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.881 [2024-04-15 02:04:58.324960] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.881 [2024-04-15 02:04:58.324975] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.881 [2024-04-15 02:04:58.324987] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.881 [2024-04-15 02:04:58.325030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.881 qpair failed and we were unable to recover it.
00:30:12.881 [2024-04-15 02:04:58.334691] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.881 [2024-04-15 02:04:58.334936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.881 [2024-04-15 02:04:58.334963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.881 [2024-04-15 02:04:58.334978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.881 [2024-04-15 02:04:58.334994] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.881 [2024-04-15 02:04:58.335025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.881 qpair failed and we were unable to recover it.
00:30:12.881 [2024-04-15 02:04:58.344639] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.881 [2024-04-15 02:04:58.344852] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.881 [2024-04-15 02:04:58.344879] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.881 [2024-04-15 02:04:58.344895] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.881 [2024-04-15 02:04:58.344907] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.881 [2024-04-15 02:04:58.344936] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.881 qpair failed and we were unable to recover it.
00:30:12.881 [2024-04-15 02:04:58.354660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.881 [2024-04-15 02:04:58.354922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.881 [2024-04-15 02:04:58.354949] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.881 [2024-04-15 02:04:58.354965] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.881 [2024-04-15 02:04:58.354977] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.881 [2024-04-15 02:04:58.355006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.881 qpair failed and we were unable to recover it.
00:30:12.881 [2024-04-15 02:04:58.364699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.881 [2024-04-15 02:04:58.364910] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.881 [2024-04-15 02:04:58.364936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.881 [2024-04-15 02:04:58.364951] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.881 [2024-04-15 02:04:58.364964] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.881 [2024-04-15 02:04:58.364992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.881 qpair failed and we were unable to recover it.
00:30:12.881 [2024-04-15 02:04:58.374748] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.881 [2024-04-15 02:04:58.374954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.881 [2024-04-15 02:04:58.374981] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.881 [2024-04-15 02:04:58.374996] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.881 [2024-04-15 02:04:58.375008] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.881 [2024-04-15 02:04:58.375058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.881 qpair failed and we were unable to recover it.
00:30:12.881 [2024-04-15 02:04:58.384740] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.881 [2024-04-15 02:04:58.384940] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.881 [2024-04-15 02:04:58.384968] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.881 [2024-04-15 02:04:58.384983] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.881 [2024-04-15 02:04:58.384996] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.881 [2024-04-15 02:04:58.385026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.881 qpair failed and we were unable to recover it.
00:30:12.881 [2024-04-15 02:04:58.394826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.881 [2024-04-15 02:04:58.395077] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.881 [2024-04-15 02:04:58.395104] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.881 [2024-04-15 02:04:58.395120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.881 [2024-04-15 02:04:58.395132] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.881 [2024-04-15 02:04:58.395163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.881 qpair failed and we were unable to recover it.
00:30:12.881 [2024-04-15 02:04:58.404882] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.881 [2024-04-15 02:04:58.405115] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.881 [2024-04-15 02:04:58.405143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.881 [2024-04-15 02:04:58.405163] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.881 [2024-04-15 02:04:58.405176] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.881 [2024-04-15 02:04:58.405207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.881 qpair failed and we were unable to recover it.
00:30:12.881 [2024-04-15 02:04:58.414913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.881 [2024-04-15 02:04:58.415159] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.881 [2024-04-15 02:04:58.415186] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.881 [2024-04-15 02:04:58.415202] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.881 [2024-04-15 02:04:58.415214] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.881 [2024-04-15 02:04:58.415244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.881 qpair failed and we were unable to recover it.
00:30:12.881 [2024-04-15 02:04:58.424893] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.881 [2024-04-15 02:04:58.425101] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.881 [2024-04-15 02:04:58.425128] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.881 [2024-04-15 02:04:58.425144] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.881 [2024-04-15 02:04:58.425156] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.881 [2024-04-15 02:04:58.425198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.881 qpair failed and we were unable to recover it.
00:30:12.881 [2024-04-15 02:04:58.434903] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.881 [2024-04-15 02:04:58.435113] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.881 [2024-04-15 02:04:58.435140] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.881 [2024-04-15 02:04:58.435155] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.881 [2024-04-15 02:04:58.435167] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.881 [2024-04-15 02:04:58.435196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.881 qpair failed and we were unable to recover it.
00:30:12.881 [2024-04-15 02:04:58.444944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.881 [2024-04-15 02:04:58.445163] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.881 [2024-04-15 02:04:58.445190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.881 [2024-04-15 02:04:58.445206] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.881 [2024-04-15 02:04:58.445218] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.882 [2024-04-15 02:04:58.445250] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.882 qpair failed and we were unable to recover it.
00:30:12.882 [2024-04-15 02:04:58.454972] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.882 [2024-04-15 02:04:58.455185] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.882 [2024-04-15 02:04:58.455213] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.882 [2024-04-15 02:04:58.455228] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.882 [2024-04-15 02:04:58.455240] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.882 [2024-04-15 02:04:58.455270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.882 qpair failed and we were unable to recover it.
00:30:12.882 [2024-04-15 02:04:58.465008] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.882 [2024-04-15 02:04:58.465223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.882 [2024-04-15 02:04:58.465251] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.882 [2024-04-15 02:04:58.465269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.882 [2024-04-15 02:04:58.465282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.882 [2024-04-15 02:04:58.465312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.882 qpair failed and we were unable to recover it.
00:30:12.882 [2024-04-15 02:04:58.475025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.882 [2024-04-15 02:04:58.475253] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.882 [2024-04-15 02:04:58.475280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.882 [2024-04-15 02:04:58.475295] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.882 [2024-04-15 02:04:58.475307] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.882 [2024-04-15 02:04:58.475336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.882 qpair failed and we were unable to recover it.
00:30:12.882 [2024-04-15 02:04:58.485086] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.882 [2024-04-15 02:04:58.485321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.882 [2024-04-15 02:04:58.485357] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.882 [2024-04-15 02:04:58.485372] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.882 [2024-04-15 02:04:58.485384] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.882 [2024-04-15 02:04:58.485415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.882 qpair failed and we were unable to recover it.
00:30:12.882 [2024-04-15 02:04:58.495124] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.882 [2024-04-15 02:04:58.495322] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.882 [2024-04-15 02:04:58.495362] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.882 [2024-04-15 02:04:58.495377] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.882 [2024-04-15 02:04:58.495390] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.882 [2024-04-15 02:04:58.495419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.882 qpair failed and we were unable to recover it.
00:30:12.882 [2024-04-15 02:04:58.505150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.882 [2024-04-15 02:04:58.505396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.882 [2024-04-15 02:04:58.505423] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.882 [2024-04-15 02:04:58.505439] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.882 [2024-04-15 02:04:58.505451] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.882 [2024-04-15 02:04:58.505480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.882 qpair failed and we were unable to recover it.
00:30:12.882 [2024-04-15 02:04:58.515188] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.882 [2024-04-15 02:04:58.515394] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.882 [2024-04-15 02:04:58.515422] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.882 [2024-04-15 02:04:58.515437] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.882 [2024-04-15 02:04:58.515452] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.882 [2024-04-15 02:04:58.515481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.882 qpair failed and we were unable to recover it.
00:30:12.882 [2024-04-15 02:04:58.525201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:12.882 [2024-04-15 02:04:58.525445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:12.882 [2024-04-15 02:04:58.525474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:12.882 [2024-04-15 02:04:58.525490] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:12.882 [2024-04-15 02:04:58.525517] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:12.882 [2024-04-15 02:04:58.525547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:12.882 qpair failed and we were unable to recover it.
00:30:13.141 [2024-04-15 02:04:58.535316] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.141 [2024-04-15 02:04:58.535521] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.141 [2024-04-15 02:04:58.535548] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.141 [2024-04-15 02:04:58.535563] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.141 [2024-04-15 02:04:58.535575] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.141 [2024-04-15 02:04:58.535610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.141 qpair failed and we were unable to recover it.
00:30:13.141 [2024-04-15 02:04:58.545294] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.141 [2024-04-15 02:04:58.545526] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.141 [2024-04-15 02:04:58.545553] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.141 [2024-04-15 02:04:58.545567] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.141 [2024-04-15 02:04:58.545579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.141 [2024-04-15 02:04:58.545609] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.141 qpair failed and we were unable to recover it.
00:30:13.141 [2024-04-15 02:04:58.555384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.141 [2024-04-15 02:04:58.555584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.141 [2024-04-15 02:04:58.555609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.141 [2024-04-15 02:04:58.555623] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.141 [2024-04-15 02:04:58.555636] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.141 [2024-04-15 02:04:58.555664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.141 qpair failed and we were unable to recover it.
00:30:13.141 [2024-04-15 02:04:58.565278] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.141 [2024-04-15 02:04:58.565482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.141 [2024-04-15 02:04:58.565509] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.141 [2024-04-15 02:04:58.565524] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.142 [2024-04-15 02:04:58.565537] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.142 [2024-04-15 02:04:58.565566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.142 qpair failed and we were unable to recover it.
00:30:13.142 [2024-04-15 02:04:58.575370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.142 [2024-04-15 02:04:58.575609] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.142 [2024-04-15 02:04:58.575635] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.142 [2024-04-15 02:04:58.575650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.142 [2024-04-15 02:04:58.575663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.142 [2024-04-15 02:04:58.575691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.142 qpair failed and we were unable to recover it.
00:30:13.142 [2024-04-15 02:04:58.585442] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.142 [2024-04-15 02:04:58.585637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.142 [2024-04-15 02:04:58.585669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.142 [2024-04-15 02:04:58.585685] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.142 [2024-04-15 02:04:58.585698] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.142 [2024-04-15 02:04:58.585727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.142 qpair failed and we were unable to recover it.
00:30:13.142 [2024-04-15 02:04:58.595393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.142 [2024-04-15 02:04:58.595593] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.142 [2024-04-15 02:04:58.595619] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.142 [2024-04-15 02:04:58.595634] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.142 [2024-04-15 02:04:58.595646] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.142 [2024-04-15 02:04:58.595675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.142 qpair failed and we were unable to recover it.
00:30:13.142 [2024-04-15 02:04:58.605435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.142 [2024-04-15 02:04:58.605673] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.142 [2024-04-15 02:04:58.605700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.142 [2024-04-15 02:04:58.605715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.142 [2024-04-15 02:04:58.605742] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.142 [2024-04-15 02:04:58.605772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.142 qpair failed and we were unable to recover it.
00:30:13.142 [2024-04-15 02:04:58.615495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.142 [2024-04-15 02:04:58.615736] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.142 [2024-04-15 02:04:58.615762] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.142 [2024-04-15 02:04:58.615777] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.142 [2024-04-15 02:04:58.615789] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.142 [2024-04-15 02:04:58.615818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.142 qpair failed and we were unable to recover it.
00:30:13.142 [2024-04-15 02:04:58.625604] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.142 [2024-04-15 02:04:58.625812] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.142 [2024-04-15 02:04:58.625838] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.142 [2024-04-15 02:04:58.625852] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.142 [2024-04-15 02:04:58.625865] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.142 [2024-04-15 02:04:58.625902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.142 qpair failed and we were unable to recover it.
00:30:13.142 [2024-04-15 02:04:58.635503] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.142 [2024-04-15 02:04:58.635714] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.142 [2024-04-15 02:04:58.635740] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.142 [2024-04-15 02:04:58.635755] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.142 [2024-04-15 02:04:58.635768] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.142 [2024-04-15 02:04:58.635796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.142 qpair failed and we were unable to recover it.
00:30:13.142 [2024-04-15 02:04:58.645538] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.142 [2024-04-15 02:04:58.645738] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.142 [2024-04-15 02:04:58.645765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.142 [2024-04-15 02:04:58.645780] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.142 [2024-04-15 02:04:58.645792] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.142 [2024-04-15 02:04:58.645821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.142 qpair failed and we were unable to recover it.
00:30:13.142 [2024-04-15 02:04:58.655579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.142 [2024-04-15 02:04:58.655785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.142 [2024-04-15 02:04:58.655811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.142 [2024-04-15 02:04:58.655826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.142 [2024-04-15 02:04:58.655838] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.142 [2024-04-15 02:04:58.655866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.142 qpair failed and we were unable to recover it.
00:30:13.142 [2024-04-15 02:04:58.665633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.142 [2024-04-15 02:04:58.665882] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.142 [2024-04-15 02:04:58.665909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.142 [2024-04-15 02:04:58.665924] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.142 [2024-04-15 02:04:58.665936] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.142 [2024-04-15 02:04:58.665965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.142 qpair failed and we were unable to recover it.
00:30:13.142 [2024-04-15 02:04:58.675634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.142 [2024-04-15 02:04:58.675840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.142 [2024-04-15 02:04:58.675866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.142 [2024-04-15 02:04:58.675881] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.142 [2024-04-15 02:04:58.675893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.142 [2024-04-15 02:04:58.675923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.142 qpair failed and we were unable to recover it.
00:30:13.142 [2024-04-15 02:04:58.685665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.142 [2024-04-15 02:04:58.685870] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.142 [2024-04-15 02:04:58.685897] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.142 [2024-04-15 02:04:58.685912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.142 [2024-04-15 02:04:58.685924] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.142 [2024-04-15 02:04:58.685956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.142 qpair failed and we were unable to recover it.
00:30:13.142 [2024-04-15 02:04:58.695735] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.142 [2024-04-15 02:04:58.695944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.142 [2024-04-15 02:04:58.695970] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.142 [2024-04-15 02:04:58.695985] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.143 [2024-04-15 02:04:58.695997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.143 [2024-04-15 02:04:58.696026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.143 qpair failed and we were unable to recover it.
00:30:13.143 [2024-04-15 02:04:58.705795] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.143 [2024-04-15 02:04:58.705995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.143 [2024-04-15 02:04:58.706022] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.143 [2024-04-15 02:04:58.706036] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.143 [2024-04-15 02:04:58.706055] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.143 [2024-04-15 02:04:58.706086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.143 qpair failed and we were unable to recover it.
00:30:13.143 [2024-04-15 02:04:58.715837] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.143 [2024-04-15 02:04:58.716061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.143 [2024-04-15 02:04:58.716088] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.143 [2024-04-15 02:04:58.716103] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.143 [2024-04-15 02:04:58.716121] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.143 [2024-04-15 02:04:58.716151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.143 qpair failed and we were unable to recover it.
00:30:13.143 [2024-04-15 02:04:58.725803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.143 [2024-04-15 02:04:58.726100] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.143 [2024-04-15 02:04:58.726137] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.143 [2024-04-15 02:04:58.726162] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.143 [2024-04-15 02:04:58.726185] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.143 [2024-04-15 02:04:58.726230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.143 qpair failed and we were unable to recover it.
00:30:13.143 [2024-04-15 02:04:58.735849] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.143 [2024-04-15 02:04:58.736052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.143 [2024-04-15 02:04:58.736082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.143 [2024-04-15 02:04:58.736097] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.143 [2024-04-15 02:04:58.736110] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.143 [2024-04-15 02:04:58.736140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.143 qpair failed and we were unable to recover it.
00:30:13.143 [2024-04-15 02:04:58.745870] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.143 [2024-04-15 02:04:58.746120] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.143 [2024-04-15 02:04:58.746148] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.143 [2024-04-15 02:04:58.746162] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.143 [2024-04-15 02:04:58.746175] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.143 [2024-04-15 02:04:58.746205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.143 qpair failed and we were unable to recover it.
00:30:13.143 [2024-04-15 02:04:58.755854] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.143 [2024-04-15 02:04:58.756064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.143 [2024-04-15 02:04:58.756090] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.143 [2024-04-15 02:04:58.756104] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.143 [2024-04-15 02:04:58.756116] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.143 [2024-04-15 02:04:58.756146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.143 qpair failed and we were unable to recover it.
00:30:13.143 [2024-04-15 02:04:58.765950] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.143 [2024-04-15 02:04:58.766164] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.143 [2024-04-15 02:04:58.766192] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.143 [2024-04-15 02:04:58.766207] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.143 [2024-04-15 02:04:58.766219] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.143 [2024-04-15 02:04:58.766248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.143 qpair failed and we were unable to recover it.
00:30:13.143 [2024-04-15 02:04:58.775913] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.143 [2024-04-15 02:04:58.776127] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.143 [2024-04-15 02:04:58.776154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.143 [2024-04-15 02:04:58.776169] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.143 [2024-04-15 02:04:58.776181] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.143 [2024-04-15 02:04:58.776211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.143 qpair failed and we were unable to recover it.
00:30:13.143 [2024-04-15 02:04:58.786053] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.143 [2024-04-15 02:04:58.786265] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.143 [2024-04-15 02:04:58.786292] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.143 [2024-04-15 02:04:58.786318] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.143 [2024-04-15 02:04:58.786330] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.143 [2024-04-15 02:04:58.786360] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.143 qpair failed and we were unable to recover it.
00:30:13.402 [2024-04-15 02:04:58.795959] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.402 [2024-04-15 02:04:58.796189] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.402 [2024-04-15 02:04:58.796216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.402 [2024-04-15 02:04:58.796231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.402 [2024-04-15 02:04:58.796243] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.402 [2024-04-15 02:04:58.796275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.402 qpair failed and we were unable to recover it.
00:30:13.402 [2024-04-15 02:04:58.806018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.402 [2024-04-15 02:04:58.806260] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.402 [2024-04-15 02:04:58.806287] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.402 [2024-04-15 02:04:58.806308] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.402 [2024-04-15 02:04:58.806321] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.402 [2024-04-15 02:04:58.806351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.402 qpair failed and we were unable to recover it.
00:30:13.402 [2024-04-15 02:04:58.816022] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.402 [2024-04-15 02:04:58.816233] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.402 [2024-04-15 02:04:58.816260] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.402 [2024-04-15 02:04:58.816274] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.402 [2024-04-15 02:04:58.816287] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.402 [2024-04-15 02:04:58.816316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.402 qpair failed and we were unable to recover it.
00:30:13.402 [2024-04-15 02:04:58.826039] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.402 [2024-04-15 02:04:58.826250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.402 [2024-04-15 02:04:58.826277] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.402 [2024-04-15 02:04:58.826292] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.402 [2024-04-15 02:04:58.826304] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.402 [2024-04-15 02:04:58.826333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.402 qpair failed and we were unable to recover it.
00:30:13.402 [2024-04-15 02:04:58.836076] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.402 [2024-04-15 02:04:58.836324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.402 [2024-04-15 02:04:58.836350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.402 [2024-04-15 02:04:58.836366] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.402 [2024-04-15 02:04:58.836378] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.402 [2024-04-15 02:04:58.836407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.402 qpair failed and we were unable to recover it.
00:30:13.402 [2024-04-15 02:04:58.846193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.402 [2024-04-15 02:04:58.846400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.403 [2024-04-15 02:04:58.846427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.403 [2024-04-15 02:04:58.846441] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.403 [2024-04-15 02:04:58.846453] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.403 [2024-04-15 02:04:58.846482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.403 qpair failed and we were unable to recover it.
00:30:13.403 [2024-04-15 02:04:58.856135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.403 [2024-04-15 02:04:58.856347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.403 [2024-04-15 02:04:58.856373] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.403 [2024-04-15 02:04:58.856387] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.403 [2024-04-15 02:04:58.856400] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.403 [2024-04-15 02:04:58.856428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.403 qpair failed and we were unable to recover it.
00:30:13.403 [2024-04-15 02:04:58.866179] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.403 [2024-04-15 02:04:58.866383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.403 [2024-04-15 02:04:58.866410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.403 [2024-04-15 02:04:58.866427] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.403 [2024-04-15 02:04:58.866440] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.403 [2024-04-15 02:04:58.866469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.403 qpair failed and we were unable to recover it.
00:30:13.403 [2024-04-15 02:04:58.876220] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.403 [2024-04-15 02:04:58.876416] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.403 [2024-04-15 02:04:58.876443] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.403 [2024-04-15 02:04:58.876458] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.403 [2024-04-15 02:04:58.876471] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.403 [2024-04-15 02:04:58.876502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.403 qpair failed and we were unable to recover it.
00:30:13.403 [2024-04-15 02:04:58.886226] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.403 [2024-04-15 02:04:58.886419] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.403 [2024-04-15 02:04:58.886446] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.403 [2024-04-15 02:04:58.886461] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.403 [2024-04-15 02:04:58.886474] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.403 [2024-04-15 02:04:58.886504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.403 qpair failed and we were unable to recover it.
00:30:13.403 [2024-04-15 02:04:58.896261] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.403 [2024-04-15 02:04:58.896454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.403 [2024-04-15 02:04:58.896481] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.403 [2024-04-15 02:04:58.896501] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.403 [2024-04-15 02:04:58.896514] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.403 [2024-04-15 02:04:58.896543] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.403 qpair failed and we were unable to recover it.
00:30:13.403 [2024-04-15 02:04:58.906357] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.403 [2024-04-15 02:04:58.906563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.403 [2024-04-15 02:04:58.906590] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.403 [2024-04-15 02:04:58.906605] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.403 [2024-04-15 02:04:58.906617] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.403 [2024-04-15 02:04:58.906659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.403 qpair failed and we were unable to recover it.
00:30:13.403 [2024-04-15 02:04:58.916362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.403 [2024-04-15 02:04:58.916555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.403 [2024-04-15 02:04:58.916582] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.403 [2024-04-15 02:04:58.916597] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.403 [2024-04-15 02:04:58.916609] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.403 [2024-04-15 02:04:58.916638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.403 qpair failed and we were unable to recover it.
00:30:13.403 [2024-04-15 02:04:58.926365] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.403 [2024-04-15 02:04:58.926562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.403 [2024-04-15 02:04:58.926588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.403 [2024-04-15 02:04:58.926603] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.403 [2024-04-15 02:04:58.926616] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.403 [2024-04-15 02:04:58.926646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.403 qpair failed and we were unable to recover it.
00:30:13.403 [2024-04-15 02:04:58.936382] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.403 [2024-04-15 02:04:58.936622] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.403 [2024-04-15 02:04:58.936649] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.403 [2024-04-15 02:04:58.936664] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.403 [2024-04-15 02:04:58.936677] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.403 [2024-04-15 02:04:58.936707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.403 qpair failed and we were unable to recover it.
00:30:13.403 [2024-04-15 02:04:58.946448] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.403 [2024-04-15 02:04:58.946666] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.403 [2024-04-15 02:04:58.946692] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.403 [2024-04-15 02:04:58.946707] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.403 [2024-04-15 02:04:58.946721] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.403 [2024-04-15 02:04:58.946750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.403 qpair failed and we were unable to recover it.
00:30:13.403 [2024-04-15 02:04:58.956452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.403 [2024-04-15 02:04:58.956647] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.403 [2024-04-15 02:04:58.956672] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.403 [2024-04-15 02:04:58.956686] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.403 [2024-04-15 02:04:58.956699] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.403 [2024-04-15 02:04:58.956728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.403 qpair failed and we were unable to recover it.
00:30:13.403 [2024-04-15 02:04:58.966448] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.403 [2024-04-15 02:04:58.966644] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.403 [2024-04-15 02:04:58.966669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.403 [2024-04-15 02:04:58.966684] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.403 [2024-04-15 02:04:58.966697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.403 [2024-04-15 02:04:58.966727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.403 qpair failed and we were unable to recover it.
00:30:13.403 [2024-04-15 02:04:58.976589] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.403 [2024-04-15 02:04:58.976784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.403 [2024-04-15 02:04:58.976810] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.404 [2024-04-15 02:04:58.976824] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.404 [2024-04-15 02:04:58.976836] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.404 [2024-04-15 02:04:58.976865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.404 qpair failed and we were unable to recover it.
00:30:13.404 [2024-04-15 02:04:58.986524] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.404 [2024-04-15 02:04:58.986777] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.404 [2024-04-15 02:04:58.986824] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.404 [2024-04-15 02:04:58.986853] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.404 [2024-04-15 02:04:58.986876] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.404 [2024-04-15 02:04:58.986923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.404 qpair failed and we were unable to recover it.
00:30:13.404 [2024-04-15 02:04:58.996603] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.404 [2024-04-15 02:04:58.996830] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.404 [2024-04-15 02:04:58.996858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.404 [2024-04-15 02:04:58.996873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.404 [2024-04-15 02:04:58.996885] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.404 [2024-04-15 02:04:58.996915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.404 qpair failed and we were unable to recover it.
00:30:13.404 [2024-04-15 02:04:59.006581] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.404 [2024-04-15 02:04:59.006781] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.404 [2024-04-15 02:04:59.006807] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.404 [2024-04-15 02:04:59.006822] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.404 [2024-04-15 02:04:59.006835] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.404 [2024-04-15 02:04:59.006865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.404 qpair failed and we were unable to recover it.
00:30:13.404 [2024-04-15 02:04:59.016635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.404 [2024-04-15 02:04:59.016833] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.404 [2024-04-15 02:04:59.016858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.404 [2024-04-15 02:04:59.016873] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.404 [2024-04-15 02:04:59.016886] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.404 [2024-04-15 02:04:59.016915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.404 qpair failed and we were unable to recover it.
00:30:13.404 [2024-04-15 02:04:59.026635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.404 [2024-04-15 02:04:59.026831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.404 [2024-04-15 02:04:59.026856] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.404 [2024-04-15 02:04:59.026871] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.404 [2024-04-15 02:04:59.026883] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.404 [2024-04-15 02:04:59.026918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.404 qpair failed and we were unable to recover it.
00:30:13.404 [2024-04-15 02:04:59.036652] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.404 [2024-04-15 02:04:59.036840] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.404 [2024-04-15 02:04:59.036865] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.404 [2024-04-15 02:04:59.036880] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.404 [2024-04-15 02:04:59.036893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.404 [2024-04-15 02:04:59.036935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.404 qpair failed and we were unable to recover it.
00:30:13.404 [2024-04-15 02:04:59.046722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.404 [2024-04-15 02:04:59.046952] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.404 [2024-04-15 02:04:59.046980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.404 [2024-04-15 02:04:59.046995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.404 [2024-04-15 02:04:59.047008] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.404 [2024-04-15 02:04:59.047038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.404 qpair failed and we were unable to recover it.
00:30:13.663 [2024-04-15 02:04:59.056806] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.663 [2024-04-15 02:04:59.057063] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.663 [2024-04-15 02:04:59.057091] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.663 [2024-04-15 02:04:59.057106] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.663 [2024-04-15 02:04:59.057119] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.663 [2024-04-15 02:04:59.057149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.663 qpair failed and we were unable to recover it.
00:30:13.663 [2024-04-15 02:04:59.066817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.663 [2024-04-15 02:04:59.067033] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.663 [2024-04-15 02:04:59.067064] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.663 [2024-04-15 02:04:59.067080] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.663 [2024-04-15 02:04:59.067093] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.663 [2024-04-15 02:04:59.067123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.663 qpair failed and we were unable to recover it.
00:30:13.663 [2024-04-15 02:04:59.076791] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.663 [2024-04-15 02:04:59.076981] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.663 [2024-04-15 02:04:59.077011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.663 [2024-04-15 02:04:59.077027] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.663 [2024-04-15 02:04:59.077040] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.663 [2024-04-15 02:04:59.077078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.663 qpair failed and we were unable to recover it.
00:30:13.663 [2024-04-15 02:04:59.086931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.663 [2024-04-15 02:04:59.087131] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.663 [2024-04-15 02:04:59.087157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.663 [2024-04-15 02:04:59.087171] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.663 [2024-04-15 02:04:59.087184] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.663 [2024-04-15 02:04:59.087214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.663 qpair failed and we were unable to recover it.
00:30:13.663 [2024-04-15 02:04:59.096915] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.663 [2024-04-15 02:04:59.097142] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.663 [2024-04-15 02:04:59.097168] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.663 [2024-04-15 02:04:59.097182] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.663 [2024-04-15 02:04:59.097196] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.663 [2024-04-15 02:04:59.097226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.663 qpair failed and we were unable to recover it.
00:30:13.663 [2024-04-15 02:04:59.106931] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.663 [2024-04-15 02:04:59.107138] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.663 [2024-04-15 02:04:59.107163] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.663 [2024-04-15 02:04:59.107178] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.663 [2024-04-15 02:04:59.107191] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.663 [2024-04-15 02:04:59.107221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.663 qpair failed and we were unable to recover it.
00:30:13.663 [2024-04-15 02:04:59.116907] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.663 [2024-04-15 02:04:59.117209] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.663 [2024-04-15 02:04:59.117238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.663 [2024-04-15 02:04:59.117253] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.663 [2024-04-15 02:04:59.117266] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.663 [2024-04-15 02:04:59.117328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.663 qpair failed and we were unable to recover it.
00:30:13.663 [2024-04-15 02:04:59.126990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.663 [2024-04-15 02:04:59.127191] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.663 [2024-04-15 02:04:59.127216] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.663 [2024-04-15 02:04:59.127231] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.663 [2024-04-15 02:04:59.127244] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.663 [2024-04-15 02:04:59.127275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.663 qpair failed and we were unable to recover it.
00:30:13.663 [2024-04-15 02:04:59.136983] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.663 [2024-04-15 02:04:59.137190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.663 [2024-04-15 02:04:59.137215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.663 [2024-04-15 02:04:59.137230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.664 [2024-04-15 02:04:59.137242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.664 [2024-04-15 02:04:59.137271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.664 qpair failed and we were unable to recover it.
00:30:13.664 [2024-04-15 02:04:59.147023] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.664 [2024-04-15 02:04:59.147223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.664 [2024-04-15 02:04:59.147248] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.664 [2024-04-15 02:04:59.147262] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.664 [2024-04-15 02:04:59.147275] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.664 [2024-04-15 02:04:59.147318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.664 qpair failed and we were unable to recover it.
00:30:13.664 [2024-04-15 02:04:59.157060] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.664 [2024-04-15 02:04:59.157303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.664 [2024-04-15 02:04:59.157330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.664 [2024-04-15 02:04:59.157346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.664 [2024-04-15 02:04:59.157359] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.664 [2024-04-15 02:04:59.157389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.664 qpair failed and we were unable to recover it.
00:30:13.664 [2024-04-15 02:04:59.167084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.664 [2024-04-15 02:04:59.167282] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.664 [2024-04-15 02:04:59.167323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.664 [2024-04-15 02:04:59.167339] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.664 [2024-04-15 02:04:59.167353] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.664 [2024-04-15 02:04:59.167382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.664 qpair failed and we were unable to recover it.
00:30:13.664 [2024-04-15 02:04:59.177115] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.664 [2024-04-15 02:04:59.177319] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.664 [2024-04-15 02:04:59.177344] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.664 [2024-04-15 02:04:59.177358] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.664 [2024-04-15 02:04:59.177371] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.664 [2024-04-15 02:04:59.177400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.664 qpair failed and we were unable to recover it.
00:30:13.664 [2024-04-15 02:04:59.187125] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.664 [2024-04-15 02:04:59.187363] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.664 [2024-04-15 02:04:59.187390] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.664 [2024-04-15 02:04:59.187406] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.664 [2024-04-15 02:04:59.187419] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.664 [2024-04-15 02:04:59.187448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.664 qpair failed and we were unable to recover it.
00:30:13.664 [2024-04-15 02:04:59.197171] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.664 [2024-04-15 02:04:59.197409] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.664 [2024-04-15 02:04:59.197437] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.664 [2024-04-15 02:04:59.197452] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.664 [2024-04-15 02:04:59.197465] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.664 [2024-04-15 02:04:59.197494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.664 qpair failed and we were unable to recover it.
00:30:13.664 [2024-04-15 02:04:59.207206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.664 [2024-04-15 02:04:59.207449] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.664 [2024-04-15 02:04:59.207474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.664 [2024-04-15 02:04:59.207489] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.664 [2024-04-15 02:04:59.207508] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.664 [2024-04-15 02:04:59.207537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.664 qpair failed and we were unable to recover it.
00:30:13.664 [2024-04-15 02:04:59.217249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.664 [2024-04-15 02:04:59.217481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.664 [2024-04-15 02:04:59.217506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.664 [2024-04-15 02:04:59.217521] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.664 [2024-04-15 02:04:59.217534] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.664 [2024-04-15 02:04:59.217576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.664 qpair failed and we were unable to recover it.
00:30:13.664 [2024-04-15 02:04:59.227238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.664 [2024-04-15 02:04:59.227431] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.664 [2024-04-15 02:04:59.227465] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.664 [2024-04-15 02:04:59.227491] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.664 [2024-04-15 02:04:59.227512] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.664 [2024-04-15 02:04:59.227555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.664 qpair failed and we were unable to recover it.
00:30:13.664 [2024-04-15 02:04:59.237289] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.664 [2024-04-15 02:04:59.237479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.664 [2024-04-15 02:04:59.237506] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.664 [2024-04-15 02:04:59.237522] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.664 [2024-04-15 02:04:59.237535] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.664 [2024-04-15 02:04:59.237566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.664 qpair failed and we were unable to recover it.
00:30:13.664 [2024-04-15 02:04:59.247322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.664 [2024-04-15 02:04:59.247513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.664 [2024-04-15 02:04:59.247540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.664 [2024-04-15 02:04:59.247555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.664 [2024-04-15 02:04:59.247568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.664 [2024-04-15 02:04:59.247599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.664 qpair failed and we were unable to recover it.
00:30:13.664 [2024-04-15 02:04:59.257344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.664 [2024-04-15 02:04:59.257589] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.664 [2024-04-15 02:04:59.257617] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.664 [2024-04-15 02:04:59.257632] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.664 [2024-04-15 02:04:59.257644] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.664 [2024-04-15 02:04:59.257674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.664 qpair failed and we were unable to recover it.
00:30:13.664 [2024-04-15 02:04:59.267356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.664 [2024-04-15 02:04:59.267548] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.664 [2024-04-15 02:04:59.267575] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.664 [2024-04-15 02:04:59.267590] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.664 [2024-04-15 02:04:59.267602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.664 [2024-04-15 02:04:59.267631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.664 qpair failed and we were unable to recover it.
00:30:13.664 [2024-04-15 02:04:59.277408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.665 [2024-04-15 02:04:59.277607] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.665 [2024-04-15 02:04:59.277634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.665 [2024-04-15 02:04:59.277650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.665 [2024-04-15 02:04:59.277665] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.665 [2024-04-15 02:04:59.277695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.665 qpair failed and we were unable to recover it.
00:30:13.665 [2024-04-15 02:04:59.287421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.665 [2024-04-15 02:04:59.287611] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.665 [2024-04-15 02:04:59.287653] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.665 [2024-04-15 02:04:59.287668] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.665 [2024-04-15 02:04:59.287680] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.665 [2024-04-15 02:04:59.287725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.665 qpair failed and we were unable to recover it.
00:30:13.665 [2024-04-15 02:04:59.297467] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.665 [2024-04-15 02:04:59.297668] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.665 [2024-04-15 02:04:59.297693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.665 [2024-04-15 02:04:59.297708] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.665 [2024-04-15 02:04:59.297728] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.665 [2024-04-15 02:04:59.297760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.665 qpair failed and we were unable to recover it.
00:30:13.665 [2024-04-15 02:04:59.307478] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.665 [2024-04-15 02:04:59.307675] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.665 [2024-04-15 02:04:59.307701] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.665 [2024-04-15 02:04:59.307715] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.665 [2024-04-15 02:04:59.307728] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.665 [2024-04-15 02:04:59.307758] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.665 qpair failed and we were unable to recover it.
00:30:13.924 [2024-04-15 02:04:59.317494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.924 [2024-04-15 02:04:59.317695] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.924 [2024-04-15 02:04:59.317723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.924 [2024-04-15 02:04:59.317738] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.924 [2024-04-15 02:04:59.317750] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.924 [2024-04-15 02:04:59.317780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-04-15 02:04:59.327537] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.924 [2024-04-15 02:04:59.327732] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.924 [2024-04-15 02:04:59.327760] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.924 [2024-04-15 02:04:59.327775] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.924 [2024-04-15 02:04:59.327788] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.924 [2024-04-15 02:04:59.327818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-04-15 02:04:59.337610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.924 [2024-04-15 02:04:59.337814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.924 [2024-04-15 02:04:59.337841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.924 [2024-04-15 02:04:59.337856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.924 [2024-04-15 02:04:59.337868] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.924 [2024-04-15 02:04:59.337897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-04-15 02:04:59.347592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.924 [2024-04-15 02:04:59.347786] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.924 [2024-04-15 02:04:59.347813] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.924 [2024-04-15 02:04:59.347828] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.924 [2024-04-15 02:04:59.347841] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.924 [2024-04-15 02:04:59.347870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-04-15 02:04:59.357630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.924 [2024-04-15 02:04:59.357839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.924 [2024-04-15 02:04:59.357866] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.924 [2024-04-15 02:04:59.357882] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.924 [2024-04-15 02:04:59.357894] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.924 [2024-04-15 02:04:59.357924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-04-15 02:04:59.367701] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.924 [2024-04-15 02:04:59.367889] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.924 [2024-04-15 02:04:59.367914] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.924 [2024-04-15 02:04:59.367928] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.924 [2024-04-15 02:04:59.367941] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.924 [2024-04-15 02:04:59.367971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-04-15 02:04:59.377793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.924 [2024-04-15 02:04:59.377995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.924 [2024-04-15 02:04:59.378021] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.924 [2024-04-15 02:04:59.378035] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.924 [2024-04-15 02:04:59.378056] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.924 [2024-04-15 02:04:59.378088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.924 qpair failed and we were unable to recover it.
00:30:13.924 [2024-04-15 02:04:59.387726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.925 [2024-04-15 02:04:59.387927] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.925 [2024-04-15 02:04:59.387953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.925 [2024-04-15 02:04:59.387977] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.925 [2024-04-15 02:04:59.387990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.925 [2024-04-15 02:04:59.388020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-04-15 02:04:59.397735] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.925 [2024-04-15 02:04:59.397922] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.925 [2024-04-15 02:04:59.397948] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.925 [2024-04-15 02:04:59.397962] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.925 [2024-04-15 02:04:59.397975] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.925 [2024-04-15 02:04:59.398005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-04-15 02:04:59.407819] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.925 [2024-04-15 02:04:59.408029] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.925 [2024-04-15 02:04:59.408061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.925 [2024-04-15 02:04:59.408077] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.925 [2024-04-15 02:04:59.408094] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.925 [2024-04-15 02:04:59.408124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-04-15 02:04:59.417944] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.925 [2024-04-15 02:04:59.418154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.925 [2024-04-15 02:04:59.418180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.925 [2024-04-15 02:04:59.418196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.925 [2024-04-15 02:04:59.418209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.925 [2024-04-15 02:04:59.418238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-04-15 02:04:59.427902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.925 [2024-04-15 02:04:59.428106] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.925 [2024-04-15 02:04:59.428132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.925 [2024-04-15 02:04:59.428147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.925 [2024-04-15 02:04:59.428159] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.925 [2024-04-15 02:04:59.428189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-04-15 02:04:59.437858] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.925 [2024-04-15 02:04:59.438103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.925 [2024-04-15 02:04:59.438131] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.925 [2024-04-15 02:04:59.438147] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.925 [2024-04-15 02:04:59.438160] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.925 [2024-04-15 02:04:59.438202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-04-15 02:04:59.447887] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.925 [2024-04-15 02:04:59.448093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.925 [2024-04-15 02:04:59.448118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.925 [2024-04-15 02:04:59.448132] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.925 [2024-04-15 02:04:59.448145] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.925 [2024-04-15 02:04:59.448175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-04-15 02:04:59.458040] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.925 [2024-04-15 02:04:59.458250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.925 [2024-04-15 02:04:59.458274] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.925 [2024-04-15 02:04:59.458289] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.925 [2024-04-15 02:04:59.458302] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.925 [2024-04-15 02:04:59.458331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-04-15 02:04:59.467953] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.925 [2024-04-15 02:04:59.468155] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.925 [2024-04-15 02:04:59.468181] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.925 [2024-04-15 02:04:59.468195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.925 [2024-04-15 02:04:59.468208] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.925 [2024-04-15 02:04:59.468238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-04-15 02:04:59.478000] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.925 [2024-04-15 02:04:59.478222] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.925 [2024-04-15 02:04:59.478263] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.925 [2024-04-15 02:04:59.478287] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.925 [2024-04-15 02:04:59.478310] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.925 [2024-04-15 02:04:59.478356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-04-15 02:04:59.488020] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.925 [2024-04-15 02:04:59.488224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.925 [2024-04-15 02:04:59.488254] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.925 [2024-04-15 02:04:59.488269] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.925 [2024-04-15 02:04:59.488282] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.925 [2024-04-15 02:04:59.488313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-04-15 02:04:59.498055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.925 [2024-04-15 02:04:59.498258] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.925 [2024-04-15 02:04:59.498285] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.925 [2024-04-15 02:04:59.498308] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.925 [2024-04-15 02:04:59.498320] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.925 [2024-04-15 02:04:59.498349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-04-15 02:04:59.508114] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.925 [2024-04-15 02:04:59.508310] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.925 [2024-04-15 02:04:59.508337] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.925 [2024-04-15 02:04:59.508352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.925 [2024-04-15 02:04:59.508364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.925 [2024-04-15 02:04:59.508395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.925 qpair failed and we were unable to recover it.
00:30:13.925 [2024-04-15 02:04:59.518129] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.925 [2024-04-15 02:04:59.518323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.925 [2024-04-15 02:04:59.518350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.925 [2024-04-15 02:04:59.518365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.926 [2024-04-15 02:04:59.518378] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.926 [2024-04-15 02:04:59.518407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.926 qpair failed and we were unable to recover it.
00:30:13.926 [2024-04-15 02:04:59.528134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.926 [2024-04-15 02:04:59.528323] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.926 [2024-04-15 02:04:59.528350] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.926 [2024-04-15 02:04:59.528365] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.926 [2024-04-15 02:04:59.528378] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.926 [2024-04-15 02:04:59.528408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.926 qpair failed and we were unable to recover it.
00:30:13.926 [2024-04-15 02:04:59.538201] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.926 [2024-04-15 02:04:59.538408] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.926 [2024-04-15 02:04:59.538434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.926 [2024-04-15 02:04:59.538449] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.926 [2024-04-15 02:04:59.538462] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.926 [2024-04-15 02:04:59.538491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.926 qpair failed and we were unable to recover it.
00:30:13.926 [2024-04-15 02:04:59.548245] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.926 [2024-04-15 02:04:59.548464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.926 [2024-04-15 02:04:59.548491] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.926 [2024-04-15 02:04:59.548506] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.926 [2024-04-15 02:04:59.548519] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.926 [2024-04-15 02:04:59.548548] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.926 qpair failed and we were unable to recover it.
00:30:13.926 [2024-04-15 02:04:59.558244] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.926 [2024-04-15 02:04:59.558436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.926 [2024-04-15 02:04:59.558461] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.926 [2024-04-15 02:04:59.558474] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.926 [2024-04-15 02:04:59.558487] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.926 [2024-04-15 02:04:59.558515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.926 qpair failed and we were unable to recover it.
00:30:13.926 [2024-04-15 02:04:59.568304] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:13.926 [2024-04-15 02:04:59.568541] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:13.926 [2024-04-15 02:04:59.568574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:13.926 [2024-04-15 02:04:59.568590] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:13.926 [2024-04-15 02:04:59.568602] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:13.926 [2024-04-15 02:04:59.568632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:13.926 qpair failed and we were unable to recover it.
00:30:14.186 [2024-04-15 02:04:59.578321] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:14.186 [2024-04-15 02:04:59.578599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:14.186 [2024-04-15 02:04:59.578626] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:14.186 [2024-04-15 02:04:59.578641] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:14.186 [2024-04-15 02:04:59.578654] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90
00:30:14.186 [2024-04-15 02:04:59.578697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:14.186 qpair failed and we were unable to recover it.
00:30:14.186 [2024-04-15 02:04:59.588367] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.186 [2024-04-15 02:04:59.588579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.186 [2024-04-15 02:04:59.588609] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.186 [2024-04-15 02:04:59.588624] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.186 [2024-04-15 02:04:59.588638] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.186 [2024-04-15 02:04:59.588668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.186 qpair failed and we were unable to recover it. 00:30:14.186 [2024-04-15 02:04:59.598362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.186 [2024-04-15 02:04:59.598570] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.186 [2024-04-15 02:04:59.598597] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.186 [2024-04-15 02:04:59.598612] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.186 [2024-04-15 02:04:59.598624] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.186 [2024-04-15 02:04:59.598653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.186 qpair failed and we were unable to recover it. 00:30:14.186 [2024-04-15 02:04:59.608422] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.186 [2024-04-15 02:04:59.608623] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.186 [2024-04-15 02:04:59.608667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.186 [2024-04-15 02:04:59.608684] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.186 [2024-04-15 02:04:59.608696] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.186 [2024-04-15 02:04:59.608731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.186 qpair failed and we were unable to recover it. 
00:30:14.186 [2024-04-15 02:04:59.618435] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.186 [2024-04-15 02:04:59.618655] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.186 [2024-04-15 02:04:59.618683] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.186 [2024-04-15 02:04:59.618699] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.186 [2024-04-15 02:04:59.618711] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.186 [2024-04-15 02:04:59.618747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.186 qpair failed and we were unable to recover it. 00:30:14.186 [2024-04-15 02:04:59.628448] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.187 [2024-04-15 02:04:59.628645] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.187 [2024-04-15 02:04:59.628671] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.187 [2024-04-15 02:04:59.628684] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.187 [2024-04-15 02:04:59.628697] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.187 [2024-04-15 02:04:59.628726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.187 qpair failed and we were unable to recover it. 00:30:14.187 [2024-04-15 02:04:59.638521] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.187 [2024-04-15 02:04:59.638761] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.187 [2024-04-15 02:04:59.638788] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.187 [2024-04-15 02:04:59.638804] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.187 [2024-04-15 02:04:59.638817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.187 [2024-04-15 02:04:59.638847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.187 qpair failed and we were unable to recover it. 
00:30:14.187 [2024-04-15 02:04:59.648522] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.187 [2024-04-15 02:04:59.648715] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.187 [2024-04-15 02:04:59.648740] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.187 [2024-04-15 02:04:59.648754] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.187 [2024-04-15 02:04:59.648767] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.187 [2024-04-15 02:04:59.648797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.187 qpair failed and we were unable to recover it. 00:30:14.187 [2024-04-15 02:04:59.658568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.187 [2024-04-15 02:04:59.658771] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.187 [2024-04-15 02:04:59.658801] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.187 [2024-04-15 02:04:59.658816] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.187 [2024-04-15 02:04:59.658830] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.187 [2024-04-15 02:04:59.658859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.187 qpair failed and we were unable to recover it. 00:30:14.187 [2024-04-15 02:04:59.668556] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.187 [2024-04-15 02:04:59.668750] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.187 [2024-04-15 02:04:59.668775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.187 [2024-04-15 02:04:59.668789] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.187 [2024-04-15 02:04:59.668801] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.187 [2024-04-15 02:04:59.668831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.187 qpair failed and we were unable to recover it. 
00:30:14.187 [2024-04-15 02:04:59.678574] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.187 [2024-04-15 02:04:59.678769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.187 [2024-04-15 02:04:59.678793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.187 [2024-04-15 02:04:59.678807] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.187 [2024-04-15 02:04:59.678821] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.187 [2024-04-15 02:04:59.678850] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.187 qpair failed and we were unable to recover it. 00:30:14.187 [2024-04-15 02:04:59.688606] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.187 [2024-04-15 02:04:59.688798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.187 [2024-04-15 02:04:59.688823] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.187 [2024-04-15 02:04:59.688836] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.187 [2024-04-15 02:04:59.688850] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.187 [2024-04-15 02:04:59.688879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.187 qpair failed and we were unable to recover it. 00:30:14.187 [2024-04-15 02:04:59.698682] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.187 [2024-04-15 02:04:59.698881] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.187 [2024-04-15 02:04:59.698905] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.187 [2024-04-15 02:04:59.698919] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.187 [2024-04-15 02:04:59.698938] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.187 [2024-04-15 02:04:59.698968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.187 qpair failed and we were unable to recover it. 
00:30:14.187 [2024-04-15 02:04:59.708721] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.187 [2024-04-15 02:04:59.708925] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.187 [2024-04-15 02:04:59.708950] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.187 [2024-04-15 02:04:59.708964] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.187 [2024-04-15 02:04:59.708978] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.187 [2024-04-15 02:04:59.709007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.187 qpair failed and we were unable to recover it. 00:30:14.187 [2024-04-15 02:04:59.718754] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.187 [2024-04-15 02:04:59.718974] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.187 [2024-04-15 02:04:59.719002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.187 [2024-04-15 02:04:59.719020] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.187 [2024-04-15 02:04:59.719034] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.187 [2024-04-15 02:04:59.719070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.187 qpair failed and we were unable to recover it. 00:30:14.187 [2024-04-15 02:04:59.728818] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.187 [2024-04-15 02:04:59.729009] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.187 [2024-04-15 02:04:59.729034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.187 [2024-04-15 02:04:59.729056] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.187 [2024-04-15 02:04:59.729070] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.187 [2024-04-15 02:04:59.729100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.187 qpair failed and we were unable to recover it. 
00:30:14.187 [2024-04-15 02:04:59.738822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.187 [2024-04-15 02:04:59.739031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.187 [2024-04-15 02:04:59.739066] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.187 [2024-04-15 02:04:59.739083] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.187 [2024-04-15 02:04:59.739096] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.187 [2024-04-15 02:04:59.739127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.187 qpair failed and we were unable to recover it. 00:30:14.187 [2024-04-15 02:04:59.748821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.187 [2024-04-15 02:04:59.749025] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.187 [2024-04-15 02:04:59.749057] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.187 [2024-04-15 02:04:59.749073] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.187 [2024-04-15 02:04:59.749086] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.187 [2024-04-15 02:04:59.749116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.187 qpair failed and we were unable to recover it. 00:30:14.187 [2024-04-15 02:04:59.758840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.187 [2024-04-15 02:04:59.759030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.187 [2024-04-15 02:04:59.759063] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.187 [2024-04-15 02:04:59.759079] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.187 [2024-04-15 02:04:59.759091] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.188 [2024-04-15 02:04:59.759121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.188 qpair failed and we were unable to recover it. 
00:30:14.188 [2024-04-15 02:04:59.768898] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.188 [2024-04-15 02:04:59.769139] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.188 [2024-04-15 02:04:59.769169] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.188 [2024-04-15 02:04:59.769184] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.188 [2024-04-15 02:04:59.769198] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.188 [2024-04-15 02:04:59.769229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.188 qpair failed and we were unable to recover it. 00:30:14.188 [2024-04-15 02:04:59.778896] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.188 [2024-04-15 02:04:59.779151] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.188 [2024-04-15 02:04:59.779180] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.188 [2024-04-15 02:04:59.779195] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.188 [2024-04-15 02:04:59.779208] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.188 [2024-04-15 02:04:59.779238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.188 qpair failed and we were unable to recover it. 00:30:14.188 [2024-04-15 02:04:59.788909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.188 [2024-04-15 02:04:59.789114] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.188 [2024-04-15 02:04:59.789139] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.188 [2024-04-15 02:04:59.789153] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.188 [2024-04-15 02:04:59.789172] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.188 [2024-04-15 02:04:59.789202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.188 qpair failed and we were unable to recover it. 
00:30:14.188 [2024-04-15 02:04:59.798974] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.188 [2024-04-15 02:04:59.799171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.188 [2024-04-15 02:04:59.799196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.188 [2024-04-15 02:04:59.799210] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.188 [2024-04-15 02:04:59.799224] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.188 [2024-04-15 02:04:59.799253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.188 qpair failed and we were unable to recover it. 00:30:14.188 [2024-04-15 02:04:59.809017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.188 [2024-04-15 02:04:59.809232] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.188 [2024-04-15 02:04:59.809261] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.188 [2024-04-15 02:04:59.809280] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.188 [2024-04-15 02:04:59.809294] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.188 [2024-04-15 02:04:59.809324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.188 qpair failed and we were unable to recover it. 00:30:14.188 [2024-04-15 02:04:59.819005] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.188 [2024-04-15 02:04:59.819211] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.188 [2024-04-15 02:04:59.819238] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.188 [2024-04-15 02:04:59.819252] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.188 [2024-04-15 02:04:59.819265] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.188 [2024-04-15 02:04:59.819295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.188 qpair failed and we were unable to recover it. 
00:30:14.188 [2024-04-15 02:04:59.829042] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.188 [2024-04-15 02:04:59.829250] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.188 [2024-04-15 02:04:59.829276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.188 [2024-04-15 02:04:59.829290] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.188 [2024-04-15 02:04:59.829303] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.188 [2024-04-15 02:04:59.829333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.188 qpair failed and we were unable to recover it. 00:30:14.448 [2024-04-15 02:04:59.839102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.448 [2024-04-15 02:04:59.839304] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.448 [2024-04-15 02:04:59.839331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.448 [2024-04-15 02:04:59.839346] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.448 [2024-04-15 02:04:59.839359] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.448 [2024-04-15 02:04:59.839389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.448 qpair failed and we were unable to recover it. 00:30:14.448 [2024-04-15 02:04:59.849121] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.448 [2024-04-15 02:04:59.849320] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.448 [2024-04-15 02:04:59.849360] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.448 [2024-04-15 02:04:59.849374] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.448 [2024-04-15 02:04:59.849387] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.448 [2024-04-15 02:04:59.849431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.448 qpair failed and we were unable to recover it. 
00:30:14.448 [2024-04-15 02:04:59.859176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.448 [2024-04-15 02:04:59.859417] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.448 [2024-04-15 02:04:59.859445] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.448 [2024-04-15 02:04:59.859459] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.448 [2024-04-15 02:04:59.859473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.448 [2024-04-15 02:04:59.859502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.448 qpair failed and we were unable to recover it. 00:30:14.448 [2024-04-15 02:04:59.869155] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.448 [2024-04-15 02:04:59.869368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.448 [2024-04-15 02:04:59.869406] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.448 [2024-04-15 02:04:59.869421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.448 [2024-04-15 02:04:59.869435] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.448 [2024-04-15 02:04:59.869465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.448 qpair failed and we were unable to recover it. 00:30:14.448 [2024-04-15 02:04:59.879193] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.448 [2024-04-15 02:04:59.879383] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.448 [2024-04-15 02:04:59.879410] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.448 [2024-04-15 02:04:59.879433] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.448 [2024-04-15 02:04:59.879457] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.448 [2024-04-15 02:04:59.879487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.448 qpair failed and we were unable to recover it. 
00:30:14.448 [2024-04-15 02:04:59.889234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.448 [2024-04-15 02:04:59.889429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.448 [2024-04-15 02:04:59.889469] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.448 [2024-04-15 02:04:59.889483] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.448 [2024-04-15 02:04:59.889495] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.448 [2024-04-15 02:04:59.889538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.448 qpair failed and we were unable to recover it. 00:30:14.448 [2024-04-15 02:04:59.899267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.448 [2024-04-15 02:04:59.899470] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.448 [2024-04-15 02:04:59.899497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.448 [2024-04-15 02:04:59.899512] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.448 [2024-04-15 02:04:59.899525] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.448 [2024-04-15 02:04:59.899554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.448 qpair failed and we were unable to recover it. 00:30:14.448 [2024-04-15 02:04:59.909277] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.448 [2024-04-15 02:04:59.909480] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.448 [2024-04-15 02:04:59.909505] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.448 [2024-04-15 02:04:59.909519] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.448 [2024-04-15 02:04:59.909532] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.448 [2024-04-15 02:04:59.909561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.448 qpair failed and we were unable to recover it. 
00:30:14.448 [2024-04-15 02:04:59.919358] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.448 [2024-04-15 02:04:59.919588] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.448 [2024-04-15 02:04:59.919616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.448 [2024-04-15 02:04:59.919631] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.448 [2024-04-15 02:04:59.919645] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.448 [2024-04-15 02:04:59.919686] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.448 qpair failed and we were unable to recover it. 00:30:14.448 [2024-04-15 02:04:59.929364] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.448 [2024-04-15 02:04:59.929556] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.448 [2024-04-15 02:04:59.929581] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.448 [2024-04-15 02:04:59.929595] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.448 [2024-04-15 02:04:59.929608] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.448 [2024-04-15 02:04:59.929638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.448 qpair failed and we were unable to recover it. 00:30:14.448 [2024-04-15 02:04:59.939362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.448 [2024-04-15 02:04:59.939568] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.448 [2024-04-15 02:04:59.939595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.448 [2024-04-15 02:04:59.939610] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.448 [2024-04-15 02:04:59.939622] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.448 [2024-04-15 02:04:59.939652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.448 qpair failed and we were unable to recover it. 
00:30:14.448 [2024-04-15 02:04:59.949406] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.449 [2024-04-15 02:04:59.949600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.449 [2024-04-15 02:04:59.949627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.449 [2024-04-15 02:04:59.949642] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.449 [2024-04-15 02:04:59.949655] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.449 [2024-04-15 02:04:59.949684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.449 qpair failed and we were unable to recover it. 00:30:14.449 [2024-04-15 02:04:59.959461] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.449 [2024-04-15 02:04:59.959660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.449 [2024-04-15 02:04:59.959687] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.449 [2024-04-15 02:04:59.959701] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.449 [2024-04-15 02:04:59.959714] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.449 [2024-04-15 02:04:59.959742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.449 qpair failed and we were unable to recover it. 00:30:14.449 [2024-04-15 02:04:59.969448] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.449 [2024-04-15 02:04:59.969648] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.449 [2024-04-15 02:04:59.969674] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.449 [2024-04-15 02:04:59.969695] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.449 [2024-04-15 02:04:59.969708] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.449 [2024-04-15 02:04:59.969738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.449 qpair failed and we were unable to recover it. 
00:30:14.449 [2024-04-15 02:04:59.979583] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.449 [2024-04-15 02:04:59.979785] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.449 [2024-04-15 02:04:59.979812] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.449 [2024-04-15 02:04:59.979827] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.449 [2024-04-15 02:04:59.979839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.449 [2024-04-15 02:04:59.979869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.449 qpair failed and we were unable to recover it. 00:30:14.449 [2024-04-15 02:04:59.989495] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.449 [2024-04-15 02:04:59.989684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.449 [2024-04-15 02:04:59.989713] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.449 [2024-04-15 02:04:59.989728] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.449 [2024-04-15 02:04:59.989741] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.449 [2024-04-15 02:04:59.989771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.449 qpair failed and we were unable to recover it. 00:30:14.449 [2024-04-15 02:04:59.999552] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.449 [2024-04-15 02:04:59.999748] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.449 [2024-04-15 02:04:59.999775] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.449 [2024-04-15 02:04:59.999790] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.449 [2024-04-15 02:04:59.999803] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.449 [2024-04-15 02:04:59.999832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.449 qpair failed and we were unable to recover it. 
00:30:14.449 [2024-04-15 02:05:00.009609] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.449 [2024-04-15 02:05:00.009821] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.449 [2024-04-15 02:05:00.009849] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.449 [2024-04-15 02:05:00.009865] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.449 [2024-04-15 02:05:00.009878] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.449 [2024-04-15 02:05:00.009908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.449 qpair failed and we were unable to recover it. 00:30:14.449 [2024-04-15 02:05:00.019663] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.449 [2024-04-15 02:05:00.019866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.449 [2024-04-15 02:05:00.019895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.449 [2024-04-15 02:05:00.019911] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.449 [2024-04-15 02:05:00.019923] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.449 [2024-04-15 02:05:00.019953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.449 qpair failed and we were unable to recover it. 00:30:14.449 [2024-04-15 02:05:00.029627] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.449 [2024-04-15 02:05:00.029829] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.449 [2024-04-15 02:05:00.029856] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.449 [2024-04-15 02:05:00.029871] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.449 [2024-04-15 02:05:00.029884] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.449 [2024-04-15 02:05:00.029913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.449 qpair failed and we were unable to recover it. 
00:30:14.449 [2024-04-15 02:05:00.039672] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.449 [2024-04-15 02:05:00.039882] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.449 [2024-04-15 02:05:00.039911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.449 [2024-04-15 02:05:00.039926] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.449 [2024-04-15 02:05:00.039939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.449 [2024-04-15 02:05:00.039969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.449 qpair failed and we were unable to recover it. 00:30:14.449 [2024-04-15 02:05:00.049702] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.449 [2024-04-15 02:05:00.049905] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.449 [2024-04-15 02:05:00.049933] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.449 [2024-04-15 02:05:00.049948] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.449 [2024-04-15 02:05:00.049960] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.449 [2024-04-15 02:05:00.049990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.449 qpair failed and we were unable to recover it. 00:30:14.449 [2024-04-15 02:05:00.059717] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.449 [2024-04-15 02:05:00.059918] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.449 [2024-04-15 02:05:00.059951] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.449 [2024-04-15 02:05:00.059967] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.449 [2024-04-15 02:05:00.059980] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.449 [2024-04-15 02:05:00.060009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.449 qpair failed and we were unable to recover it. 
00:30:14.449 [2024-04-15 02:05:00.069731] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.449 [2024-04-15 02:05:00.069944] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.449 [2024-04-15 02:05:00.069970] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.449 [2024-04-15 02:05:00.069985] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.449 [2024-04-15 02:05:00.069997] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.449 [2024-04-15 02:05:00.070027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.449 qpair failed and we were unable to recover it. 00:30:14.449 [2024-04-15 02:05:00.079743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.449 [2024-04-15 02:05:00.079937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.449 [2024-04-15 02:05:00.079963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.449 [2024-04-15 02:05:00.079978] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.449 [2024-04-15 02:05:00.079990] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.449 [2024-04-15 02:05:00.080020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.449 qpair failed and we were unable to recover it. 00:30:14.449 [2024-04-15 02:05:00.089860] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.449 [2024-04-15 02:05:00.090077] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.449 [2024-04-15 02:05:00.090105] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.449 [2024-04-15 02:05:00.090120] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.449 [2024-04-15 02:05:00.090132] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.450 [2024-04-15 02:05:00.090162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.450 qpair failed and we were unable to recover it. 
00:30:14.708 [2024-04-15 02:05:00.099876] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.708 [2024-04-15 02:05:00.100078] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.708 [2024-04-15 02:05:00.100104] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.708 [2024-04-15 02:05:00.100119] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.708 [2024-04-15 02:05:00.100131] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.708 [2024-04-15 02:05:00.100169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.708 qpair failed and we were unable to recover it. 00:30:14.708 [2024-04-15 02:05:00.109884] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.709 [2024-04-15 02:05:00.110096] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.709 [2024-04-15 02:05:00.110123] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.709 [2024-04-15 02:05:00.110137] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.709 [2024-04-15 02:05:00.110149] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.709 [2024-04-15 02:05:00.110179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.709 qpair failed and we were unable to recover it. 00:30:14.709 [2024-04-15 02:05:00.119932] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.709 [2024-04-15 02:05:00.120143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.709 [2024-04-15 02:05:00.120171] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.709 [2024-04-15 02:05:00.120190] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.709 [2024-04-15 02:05:00.120203] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.709 [2024-04-15 02:05:00.120235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.709 qpair failed and we were unable to recover it. 
00:30:14.709 [2024-04-15 02:05:00.129902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.709 [2024-04-15 02:05:00.130092] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.709 [2024-04-15 02:05:00.130119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.709 [2024-04-15 02:05:00.130134] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.709 [2024-04-15 02:05:00.130146] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.709 [2024-04-15 02:05:00.130176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.709 qpair failed and we were unable to recover it. 00:30:14.709 [2024-04-15 02:05:00.139938] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.709 [2024-04-15 02:05:00.140149] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.709 [2024-04-15 02:05:00.140176] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.709 [2024-04-15 02:05:00.140191] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.709 [2024-04-15 02:05:00.140203] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.709 [2024-04-15 02:05:00.140231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.709 qpair failed and we were unable to recover it. 00:30:14.709 [2024-04-15 02:05:00.149952] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.709 [2024-04-15 02:05:00.150147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.709 [2024-04-15 02:05:00.150179] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.709 [2024-04-15 02:05:00.150194] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.709 [2024-04-15 02:05:00.150206] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.709 [2024-04-15 02:05:00.150235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.709 qpair failed and we were unable to recover it. 
00:30:14.709 [2024-04-15 02:05:00.160016] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.709 [2024-04-15 02:05:00.160257] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.709 [2024-04-15 02:05:00.160284] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.709 [2024-04-15 02:05:00.160299] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.709 [2024-04-15 02:05:00.160311] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.709 [2024-04-15 02:05:00.160340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.709 qpair failed and we were unable to recover it. 00:30:14.709 [2024-04-15 02:05:00.170119] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.709 [2024-04-15 02:05:00.170318] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.709 [2024-04-15 02:05:00.170346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.709 [2024-04-15 02:05:00.170379] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.709 [2024-04-15 02:05:00.170392] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.709 [2024-04-15 02:05:00.170436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.709 qpair failed and we were unable to recover it. 00:30:14.709 [2024-04-15 02:05:00.180059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.709 [2024-04-15 02:05:00.180262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.709 [2024-04-15 02:05:00.180289] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.709 [2024-04-15 02:05:00.180304] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.709 [2024-04-15 02:05:00.180317] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.709 [2024-04-15 02:05:00.180346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.709 qpair failed and we were unable to recover it. 
00:30:14.709 [2024-04-15 02:05:00.190077] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.709 [2024-04-15 02:05:00.190275] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.709 [2024-04-15 02:05:00.190301] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.709 [2024-04-15 02:05:00.190316] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.709 [2024-04-15 02:05:00.190329] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.709 [2024-04-15 02:05:00.190364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.709 qpair failed and we were unable to recover it. 00:30:14.709 [2024-04-15 02:05:00.200102] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.709 [2024-04-15 02:05:00.200301] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.709 [2024-04-15 02:05:00.200328] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.709 [2024-04-15 02:05:00.200342] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.709 [2024-04-15 02:05:00.200355] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.709 [2024-04-15 02:05:00.200385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.709 qpair failed and we were unable to recover it. 00:30:14.709 [2024-04-15 02:05:00.210154] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.709 [2024-04-15 02:05:00.210353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.709 [2024-04-15 02:05:00.210380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.709 [2024-04-15 02:05:00.210395] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.709 [2024-04-15 02:05:00.210407] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.709 [2024-04-15 02:05:00.210438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.709 qpair failed and we were unable to recover it. 
00:30:14.709 [2024-04-15 02:05:00.220206] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.709 [2024-04-15 02:05:00.220412] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.709 [2024-04-15 02:05:00.220439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.709 [2024-04-15 02:05:00.220454] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.709 [2024-04-15 02:05:00.220466] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.709 [2024-04-15 02:05:00.220495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.709 qpair failed and we were unable to recover it. 00:30:14.709 [2024-04-15 02:05:00.230238] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.709 [2024-04-15 02:05:00.230505] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.709 [2024-04-15 02:05:00.230544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.709 [2024-04-15 02:05:00.230568] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.709 [2024-04-15 02:05:00.230591] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.709 [2024-04-15 02:05:00.230637] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.709 qpair failed and we were unable to recover it. 00:30:14.709 [2024-04-15 02:05:00.240258] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.709 [2024-04-15 02:05:00.240474] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.709 [2024-04-15 02:05:00.240503] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.710 [2024-04-15 02:05:00.240518] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.710 [2024-04-15 02:05:00.240531] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.710 [2024-04-15 02:05:00.240565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.710 qpair failed and we were unable to recover it. 
00:30:14.710 [2024-04-15 02:05:00.250272] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.710 [2024-04-15 02:05:00.250476] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.710 [2024-04-15 02:05:00.250504] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.710 [2024-04-15 02:05:00.250520] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.710 [2024-04-15 02:05:00.250532] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.710 [2024-04-15 02:05:00.250562] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.710 qpair failed and we were unable to recover it. 00:30:14.710 [2024-04-15 02:05:00.260302] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.710 [2024-04-15 02:05:00.260500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.710 [2024-04-15 02:05:00.260527] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.710 [2024-04-15 02:05:00.260543] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.710 [2024-04-15 02:05:00.260556] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.710 [2024-04-15 02:05:00.260587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.710 qpair failed and we were unable to recover it. 00:30:14.710 [2024-04-15 02:05:00.270314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.710 [2024-04-15 02:05:00.270507] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.710 [2024-04-15 02:05:00.270533] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.710 [2024-04-15 02:05:00.270548] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.710 [2024-04-15 02:05:00.270561] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.710 [2024-04-15 02:05:00.270591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.710 qpair failed and we were unable to recover it. 
00:30:14.710 [2024-04-15 02:05:00.280379] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.710 [2024-04-15 02:05:00.280568] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.710 [2024-04-15 02:05:00.280595] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.710 [2024-04-15 02:05:00.280611] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.710 [2024-04-15 02:05:00.280630] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.710 [2024-04-15 02:05:00.280660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.710 qpair failed and we were unable to recover it. 00:30:14.710 [2024-04-15 02:05:00.290409] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.710 [2024-04-15 02:05:00.290607] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.710 [2024-04-15 02:05:00.290634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.710 [2024-04-15 02:05:00.290650] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.710 [2024-04-15 02:05:00.290663] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.710 [2024-04-15 02:05:00.290708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.710 qpair failed and we were unable to recover it. 00:30:14.710 [2024-04-15 02:05:00.300423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.710 [2024-04-15 02:05:00.300623] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.710 [2024-04-15 02:05:00.300650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.710 [2024-04-15 02:05:00.300665] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.710 [2024-04-15 02:05:00.300678] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.710 [2024-04-15 02:05:00.300709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.710 qpair failed and we were unable to recover it. 
00:30:14.710 [2024-04-15 02:05:00.310433] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.710 [2024-04-15 02:05:00.310635] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.710 [2024-04-15 02:05:00.310662] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.710 [2024-04-15 02:05:00.310677] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.710 [2024-04-15 02:05:00.310690] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.710 [2024-04-15 02:05:00.310719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.710 qpair failed and we were unable to recover it. 00:30:14.710 [2024-04-15 02:05:00.320460] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.710 [2024-04-15 02:05:00.320650] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.710 [2024-04-15 02:05:00.320678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.710 [2024-04-15 02:05:00.320694] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.710 [2024-04-15 02:05:00.320706] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.710 [2024-04-15 02:05:00.320737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.710 qpair failed and we were unable to recover it. 00:30:14.710 [2024-04-15 02:05:00.330477] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.710 [2024-04-15 02:05:00.330676] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.710 [2024-04-15 02:05:00.330703] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.710 [2024-04-15 02:05:00.330718] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.710 [2024-04-15 02:05:00.330731] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.710 [2024-04-15 02:05:00.330761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.710 qpair failed and we were unable to recover it. 
00:30:14.710 [2024-04-15 02:05:00.340615] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.710 [2024-04-15 02:05:00.340814] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.710 [2024-04-15 02:05:00.340841] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.710 [2024-04-15 02:05:00.340856] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.710 [2024-04-15 02:05:00.340868] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.710 [2024-04-15 02:05:00.340898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.710 qpair failed and we were unable to recover it. 00:30:14.710 [2024-04-15 02:05:00.350613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.710 [2024-04-15 02:05:00.350835] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.710 [2024-04-15 02:05:00.350862] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.710 [2024-04-15 02:05:00.350877] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.710 [2024-04-15 02:05:00.350890] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.710 [2024-04-15 02:05:00.350919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.710 qpair failed and we were unable to recover it. 00:30:14.969 [2024-04-15 02:05:00.360579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.970 [2024-04-15 02:05:00.360857] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.970 [2024-04-15 02:05:00.360884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.970 [2024-04-15 02:05:00.360899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.970 [2024-04-15 02:05:00.360926] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.970 [2024-04-15 02:05:00.360955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.970 qpair failed and we were unable to recover it. 
00:30:14.970 [2024-04-15 02:05:00.370633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.970 [2024-04-15 02:05:00.370826] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.970 [2024-04-15 02:05:00.370867] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.970 [2024-04-15 02:05:00.370888] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.970 [2024-04-15 02:05:00.370900] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.970 [2024-04-15 02:05:00.370945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.970 qpair failed and we were unable to recover it. 00:30:14.970 [2024-04-15 02:05:00.380656] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.970 [2024-04-15 02:05:00.380856] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.970 [2024-04-15 02:05:00.380884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.970 [2024-04-15 02:05:00.380899] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.970 [2024-04-15 02:05:00.380912] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.970 [2024-04-15 02:05:00.380944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.970 qpair failed and we were unable to recover it. 00:30:14.970 [2024-04-15 02:05:00.390713] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.970 [2024-04-15 02:05:00.390907] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.970 [2024-04-15 02:05:00.390935] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.970 [2024-04-15 02:05:00.390950] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.970 [2024-04-15 02:05:00.390962] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.970 [2024-04-15 02:05:00.390992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.970 qpair failed and we were unable to recover it. 
00:30:14.970 [2024-04-15 02:05:00.400679] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.970 [2024-04-15 02:05:00.400866] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.970 [2024-04-15 02:05:00.400894] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.970 [2024-04-15 02:05:00.400908] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.970 [2024-04-15 02:05:00.400921] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.970 [2024-04-15 02:05:00.400951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.970 qpair failed and we were unable to recover it. 00:30:14.970 [2024-04-15 02:05:00.410759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.970 [2024-04-15 02:05:00.411001] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.970 [2024-04-15 02:05:00.411042] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.970 [2024-04-15 02:05:00.411070] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.970 [2024-04-15 02:05:00.411084] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.970 [2024-04-15 02:05:00.411115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.970 qpair failed and we were unable to recover it. 00:30:14.970 [2024-04-15 02:05:00.420792] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.970 [2024-04-15 02:05:00.421024] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.970 [2024-04-15 02:05:00.421057] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.970 [2024-04-15 02:05:00.421074] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.970 [2024-04-15 02:05:00.421087] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.970 [2024-04-15 02:05:00.421117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.970 qpair failed and we were unable to recover it. 
00:30:14.970 [2024-04-15 02:05:00.430799] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.970 [2024-04-15 02:05:00.431001] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.970 [2024-04-15 02:05:00.431029] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.970 [2024-04-15 02:05:00.431044] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.970 [2024-04-15 02:05:00.431066] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.970 [2024-04-15 02:05:00.431095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.970 qpair failed and we were unable to recover it. 00:30:14.970 [2024-04-15 02:05:00.440840] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.970 [2024-04-15 02:05:00.441081] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.970 [2024-04-15 02:05:00.441108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.970 [2024-04-15 02:05:00.441124] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.970 [2024-04-15 02:05:00.441136] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.970 [2024-04-15 02:05:00.441165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.970 qpair failed and we were unable to recover it. 00:30:14.970 [2024-04-15 02:05:00.450846] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.970 [2024-04-15 02:05:00.451071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.970 [2024-04-15 02:05:00.451099] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.970 [2024-04-15 02:05:00.451114] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.970 [2024-04-15 02:05:00.451127] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.970 [2024-04-15 02:05:00.451169] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.970 qpair failed and we were unable to recover it. 
00:30:14.970 [2024-04-15 02:05:00.460975] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.970 [2024-04-15 02:05:00.461182] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.970 [2024-04-15 02:05:00.461209] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.970 [2024-04-15 02:05:00.461230] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.970 [2024-04-15 02:05:00.461243] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.970 [2024-04-15 02:05:00.461273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.970 qpair failed and we were unable to recover it. 00:30:14.970 [2024-04-15 02:05:00.470955] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.970 [2024-04-15 02:05:00.471172] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.970 [2024-04-15 02:05:00.471199] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.970 [2024-04-15 02:05:00.471215] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.970 [2024-04-15 02:05:00.471228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.970 [2024-04-15 02:05:00.471259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.970 qpair failed and we were unable to recover it. 00:30:14.970 [2024-04-15 02:05:00.480941] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.970 [2024-04-15 02:05:00.481145] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.970 [2024-04-15 02:05:00.481173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.970 [2024-04-15 02:05:00.481188] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.970 [2024-04-15 02:05:00.481201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.970 [2024-04-15 02:05:00.481231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.970 qpair failed and we were unable to recover it. 
00:30:14.970 [2024-04-15 02:05:00.491001] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.971 [2024-04-15 02:05:00.491242] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.971 [2024-04-15 02:05:00.491270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.971 [2024-04-15 02:05:00.491285] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.971 [2024-04-15 02:05:00.491297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.971 [2024-04-15 02:05:00.491336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-04-15 02:05:00.500987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.971 [2024-04-15 02:05:00.501203] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.971 [2024-04-15 02:05:00.501230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.971 [2024-04-15 02:05:00.501245] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.971 [2024-04-15 02:05:00.501257] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.971 [2024-04-15 02:05:00.501287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-04-15 02:05:00.511055] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.971 [2024-04-15 02:05:00.511285] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.971 [2024-04-15 02:05:00.511312] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.971 [2024-04-15 02:05:00.511327] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.971 [2024-04-15 02:05:00.511340] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.971 [2024-04-15 02:05:00.511369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.971 qpair failed and we were unable to recover it. 
00:30:14.971 [2024-04-15 02:05:00.521056] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.971 [2024-04-15 02:05:00.521269] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.971 [2024-04-15 02:05:00.521296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.971 [2024-04-15 02:05:00.521311] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.971 [2024-04-15 02:05:00.521332] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.971 [2024-04-15 02:05:00.521362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-04-15 02:05:00.531087] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.971 [2024-04-15 02:05:00.531341] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.971 [2024-04-15 02:05:00.531368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.971 [2024-04-15 02:05:00.531383] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.971 [2024-04-15 02:05:00.531396] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.971 [2024-04-15 02:05:00.531426] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-04-15 02:05:00.541186] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.971 [2024-04-15 02:05:00.541402] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.971 [2024-04-15 02:05:00.541428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.971 [2024-04-15 02:05:00.541447] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.971 [2024-04-15 02:05:00.541460] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.971 [2024-04-15 02:05:00.541490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.971 qpair failed and we were unable to recover it. 
00:30:14.971 [2024-04-15 02:05:00.551150] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.971 [2024-04-15 02:05:00.551357] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.971 [2024-04-15 02:05:00.551388] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.971 [2024-04-15 02:05:00.551403] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.971 [2024-04-15 02:05:00.551416] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.971 [2024-04-15 02:05:00.551446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-04-15 02:05:00.561194] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.971 [2024-04-15 02:05:00.561429] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.971 [2024-04-15 02:05:00.561454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.971 [2024-04-15 02:05:00.561471] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.971 [2024-04-15 02:05:00.561485] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.971 [2024-04-15 02:05:00.561514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-04-15 02:05:00.571187] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.971 [2024-04-15 02:05:00.571401] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.971 [2024-04-15 02:05:00.571428] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.971 [2024-04-15 02:05:00.571443] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.971 [2024-04-15 02:05:00.571455] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.971 [2024-04-15 02:05:00.571485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.971 qpair failed and we were unable to recover it. 
00:30:14.971 [2024-04-15 02:05:00.581235] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.971 [2024-04-15 02:05:00.581485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.971 [2024-04-15 02:05:00.581512] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.971 [2024-04-15 02:05:00.581527] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.971 [2024-04-15 02:05:00.581540] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.971 [2024-04-15 02:05:00.581568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-04-15 02:05:00.591249] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.971 [2024-04-15 02:05:00.591452] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.971 [2024-04-15 02:05:00.591479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.971 [2024-04-15 02:05:00.591494] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.971 [2024-04-15 02:05:00.591506] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.971 [2024-04-15 02:05:00.591541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.971 qpair failed and we were unable to recover it. 00:30:14.971 [2024-04-15 02:05:00.601266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.971 [2024-04-15 02:05:00.601466] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.971 [2024-04-15 02:05:00.601493] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.971 [2024-04-15 02:05:00.601508] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.971 [2024-04-15 02:05:00.601520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.971 [2024-04-15 02:05:00.601549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.971 qpair failed and we were unable to recover it. 
00:30:14.971 [2024-04-15 02:05:00.611329] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:14.971 [2024-04-15 02:05:00.611525] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:14.971 [2024-04-15 02:05:00.611551] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:14.971 [2024-04-15 02:05:00.611566] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:14.972 [2024-04-15 02:05:00.611579] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:14.972 [2024-04-15 02:05:00.611608] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:14.972 qpair failed and we were unable to recover it. 00:30:15.231 [2024-04-15 02:05:00.621507] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.231 [2024-04-15 02:05:00.621711] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.231 [2024-04-15 02:05:00.621738] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.231 [2024-04-15 02:05:00.621753] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.231 [2024-04-15 02:05:00.621766] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.231 [2024-04-15 02:05:00.621794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-04-15 02:05:00.631384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.231 [2024-04-15 02:05:00.631630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.231 [2024-04-15 02:05:00.631658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.231 [2024-04-15 02:05:00.631673] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.231 [2024-04-15 02:05:00.631686] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.231 [2024-04-15 02:05:00.631717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.231 qpair failed and we were unable to recover it. 
00:30:15.231 [2024-04-15 02:05:00.641402] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.231 [2024-04-15 02:05:00.641638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.231 [2024-04-15 02:05:00.641670] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.231 [2024-04-15 02:05:00.641686] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.231 [2024-04-15 02:05:00.641699] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.231 [2024-04-15 02:05:00.641740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-04-15 02:05:00.651442] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.231 [2024-04-15 02:05:00.651650] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.231 [2024-04-15 02:05:00.651677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.231 [2024-04-15 02:05:00.651692] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.231 [2024-04-15 02:05:00.651705] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.231 [2024-04-15 02:05:00.651747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-04-15 02:05:00.661462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.231 [2024-04-15 02:05:00.661667] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.231 [2024-04-15 02:05:00.661694] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.231 [2024-04-15 02:05:00.661709] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.231 [2024-04-15 02:05:00.661722] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.231 [2024-04-15 02:05:00.661751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.231 qpair failed and we were unable to recover it. 
00:30:15.231 [2024-04-15 02:05:00.671494] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.231 [2024-04-15 02:05:00.671723] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.231 [2024-04-15 02:05:00.671757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.231 [2024-04-15 02:05:00.671772] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.231 [2024-04-15 02:05:00.671785] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.231 [2024-04-15 02:05:00.671825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-04-15 02:05:00.681633] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.231 [2024-04-15 02:05:00.681841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.231 [2024-04-15 02:05:00.681870] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.231 [2024-04-15 02:05:00.681886] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.231 [2024-04-15 02:05:00.681899] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.231 [2024-04-15 02:05:00.681934] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-04-15 02:05:00.691553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.231 [2024-04-15 02:05:00.691762] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.231 [2024-04-15 02:05:00.691789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.231 [2024-04-15 02:05:00.691805] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.231 [2024-04-15 02:05:00.691817] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.231 [2024-04-15 02:05:00.691849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.231 qpair failed and we were unable to recover it. 
00:30:15.231 [2024-04-15 02:05:00.701586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.231 [2024-04-15 02:05:00.701784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.231 [2024-04-15 02:05:00.701811] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.231 [2024-04-15 02:05:00.701826] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.231 [2024-04-15 02:05:00.701839] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.231 [2024-04-15 02:05:00.701868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-04-15 02:05:00.711684] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.231 [2024-04-15 02:05:00.711884] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.231 [2024-04-15 02:05:00.711911] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.231 [2024-04-15 02:05:00.711926] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.231 [2024-04-15 02:05:00.711939] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.231 [2024-04-15 02:05:00.711969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.231 qpair failed and we were unable to recover it. 00:30:15.231 [2024-04-15 02:05:00.721624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.231 [2024-04-15 02:05:00.721826] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.231 [2024-04-15 02:05:00.721853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.231 [2024-04-15 02:05:00.721867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.231 [2024-04-15 02:05:00.721880] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.232 [2024-04-15 02:05:00.721909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.232 qpair failed and we were unable to recover it. 
00:30:15.232 [2024-04-15 02:05:00.731665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.232 [2024-04-15 02:05:00.731868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.232 [2024-04-15 02:05:00.731909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.232 [2024-04-15 02:05:00.731925] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.232 [2024-04-15 02:05:00.731937] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.232 [2024-04-15 02:05:00.731967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-04-15 02:05:00.741715] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.232 [2024-04-15 02:05:00.741916] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.232 [2024-04-15 02:05:00.741942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.232 [2024-04-15 02:05:00.741957] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.232 [2024-04-15 02:05:00.741969] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.232 [2024-04-15 02:05:00.741999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-04-15 02:05:00.751722] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.232 [2024-04-15 02:05:00.751912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.232 [2024-04-15 02:05:00.751938] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.232 [2024-04-15 02:05:00.751953] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.232 [2024-04-15 02:05:00.751965] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.232 [2024-04-15 02:05:00.751994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.232 qpair failed and we were unable to recover it. 
00:30:15.232 [2024-04-15 02:05:00.761768] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.232 [2024-04-15 02:05:00.761977] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.232 [2024-04-15 02:05:00.762004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.232 [2024-04-15 02:05:00.762019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.232 [2024-04-15 02:05:00.762041] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.232 [2024-04-15 02:05:00.762093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-04-15 02:05:00.771863] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.232 [2024-04-15 02:05:00.772073] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.232 [2024-04-15 02:05:00.772100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.232 [2024-04-15 02:05:00.772115] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.232 [2024-04-15 02:05:00.772133] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.232 [2024-04-15 02:05:00.772165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-04-15 02:05:00.781935] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.232 [2024-04-15 02:05:00.782143] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.232 [2024-04-15 02:05:00.782170] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.232 [2024-04-15 02:05:00.782185] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.232 [2024-04-15 02:05:00.782197] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.232 [2024-04-15 02:05:00.782227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.232 qpair failed and we were unable to recover it. 
00:30:15.232 [2024-04-15 02:05:00.791835] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.232 [2024-04-15 02:05:00.792031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.232 [2024-04-15 02:05:00.792068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.232 [2024-04-15 02:05:00.792084] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.232 [2024-04-15 02:05:00.792097] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.232 [2024-04-15 02:05:00.792127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-04-15 02:05:00.801964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.232 [2024-04-15 02:05:00.802170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.232 [2024-04-15 02:05:00.802198] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.232 [2024-04-15 02:05:00.802213] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.232 [2024-04-15 02:05:00.802225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.232 [2024-04-15 02:05:00.802254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-04-15 02:05:00.811921] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.232 [2024-04-15 02:05:00.812123] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.232 [2024-04-15 02:05:00.812150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.232 [2024-04-15 02:05:00.812166] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.232 [2024-04-15 02:05:00.812178] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.232 [2024-04-15 02:05:00.812208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.232 qpair failed and we were unable to recover it. 
00:30:15.232 [2024-04-15 02:05:00.821940] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.232 [2024-04-15 02:05:00.822154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.232 [2024-04-15 02:05:00.822181] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.232 [2024-04-15 02:05:00.822196] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.232 [2024-04-15 02:05:00.822209] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.232 [2024-04-15 02:05:00.822238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-04-15 02:05:00.831949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.232 [2024-04-15 02:05:00.832146] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.232 [2024-04-15 02:05:00.832173] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.232 [2024-04-15 02:05:00.832188] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.232 [2024-04-15 02:05:00.832201] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.232 [2024-04-15 02:05:00.832232] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.232 qpair failed and we were unable to recover it. 00:30:15.232 [2024-04-15 02:05:00.841985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.232 [2024-04-15 02:05:00.842190] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.232 [2024-04-15 02:05:00.842215] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.232 [2024-04-15 02:05:00.842229] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.232 [2024-04-15 02:05:00.842242] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.232 [2024-04-15 02:05:00.842272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.232 qpair failed and we were unable to recover it. 
00:30:15.232 [2024-04-15 02:05:00.852005] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.232 [2024-04-15 02:05:00.852223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.232 [2024-04-15 02:05:00.852251] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.232 [2024-04-15 02:05:00.852265] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.232 [2024-04-15 02:05:00.852279] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.233 [2024-04-15 02:05:00.852309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.233 qpair failed and we were unable to recover it. 00:30:15.233 [2024-04-15 02:05:00.862041] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.233 [2024-04-15 02:05:00.862248] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.233 [2024-04-15 02:05:00.862273] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.233 [2024-04-15 02:05:00.862287] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.233 [2024-04-15 02:05:00.862306] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.233 [2024-04-15 02:05:00.862336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.233 qpair failed and we were unable to recover it. 00:30:15.233 [2024-04-15 02:05:00.872224] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.233 [2024-04-15 02:05:00.872479] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.233 [2024-04-15 02:05:00.872507] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.233 [2024-04-15 02:05:00.872522] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.233 [2024-04-15 02:05:00.872535] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.233 [2024-04-15 02:05:00.872579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.233 qpair failed and we were unable to recover it. 
00:30:15.492 [2024-04-15 02:05:00.882105] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.492 [2024-04-15 02:05:00.882306] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.492 [2024-04-15 02:05:00.882330] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.492 [2024-04-15 02:05:00.882345] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.492 [2024-04-15 02:05:00.882358] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.492 [2024-04-15 02:05:00.882387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-04-15 02:05:00.892171] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.492 [2024-04-15 02:05:00.892368] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.492 [2024-04-15 02:05:00.892397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.492 [2024-04-15 02:05:00.892413] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.492 [2024-04-15 02:05:00.892427] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.492 [2024-04-15 02:05:00.892456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-04-15 02:05:00.902173] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.492 [2024-04-15 02:05:00.902374] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.492 [2024-04-15 02:05:00.902401] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.492 [2024-04-15 02:05:00.902415] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.492 [2024-04-15 02:05:00.902439] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.492 [2024-04-15 02:05:00.902468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.492 qpair failed and we were unable to recover it. 
00:30:15.492 [2024-04-15 02:05:00.912219] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.492 [2024-04-15 02:05:00.912500] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.492 [2024-04-15 02:05:00.912529] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.492 [2024-04-15 02:05:00.912560] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.492 [2024-04-15 02:05:00.912574] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.492 [2024-04-15 02:05:00.912603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-04-15 02:05:00.922362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.492 [2024-04-15 02:05:00.922561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.492 [2024-04-15 02:05:00.922587] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.492 [2024-04-15 02:05:00.922601] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.492 [2024-04-15 02:05:00.922613] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.492 [2024-04-15 02:05:00.922643] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-04-15 02:05:00.932407] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.492 [2024-04-15 02:05:00.932639] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.492 [2024-04-15 02:05:00.932667] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.492 [2024-04-15 02:05:00.932683] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.492 [2024-04-15 02:05:00.932696] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.492 [2024-04-15 02:05:00.932740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.492 qpair failed and we were unable to recover it. 
00:30:15.492 [2024-04-15 02:05:00.942318] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.492 [2024-04-15 02:05:00.942520] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.492 [2024-04-15 02:05:00.942546] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.492 [2024-04-15 02:05:00.942560] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.492 [2024-04-15 02:05:00.942573] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.492 [2024-04-15 02:05:00.942603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-04-15 02:05:00.952305] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.492 [2024-04-15 02:05:00.952503] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.492 [2024-04-15 02:05:00.952529] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.492 [2024-04-15 02:05:00.952549] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.492 [2024-04-15 02:05:00.952564] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.492 [2024-04-15 02:05:00.952593] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.492 qpair failed and we were unable to recover it. 00:30:15.492 [2024-04-15 02:05:00.962369] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.492 [2024-04-15 02:05:00.962574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.492 [2024-04-15 02:05:00.962599] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.492 [2024-04-15 02:05:00.962614] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.492 [2024-04-15 02:05:00.962626] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.492 [2024-04-15 02:05:00.962657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.492 qpair failed and we were unable to recover it. 
00:30:15.492 [2024-04-15 02:05:00.972414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.492 [2024-04-15 02:05:00.972607] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.492 [2024-04-15 02:05:00.972633] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.492 [2024-04-15 02:05:00.972663] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.493 [2024-04-15 02:05:00.972676] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.493 [2024-04-15 02:05:00.972720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-04-15 02:05:00.982520] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.493 [2024-04-15 02:05:00.982775] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.493 [2024-04-15 02:05:00.982800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.493 [2024-04-15 02:05:00.982814] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.493 [2024-04-15 02:05:00.982827] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.493 [2024-04-15 02:05:00.982857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-04-15 02:05:00.992462] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.493 [2024-04-15 02:05:00.992654] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.493 [2024-04-15 02:05:00.992680] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.493 [2024-04-15 02:05:00.992693] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.493 [2024-04-15 02:05:00.992706] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.493 [2024-04-15 02:05:00.992735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.493 qpair failed and we were unable to recover it. 
00:30:15.493 [2024-04-15 02:05:01.002500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.493 [2024-04-15 02:05:01.002696] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.493 [2024-04-15 02:05:01.002722] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.493 [2024-04-15 02:05:01.002736] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.493 [2024-04-15 02:05:01.002749] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.493 [2024-04-15 02:05:01.002778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-04-15 02:05:01.012498] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.493 [2024-04-15 02:05:01.012685] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.493 [2024-04-15 02:05:01.012710] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.493 [2024-04-15 02:05:01.012725] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.493 [2024-04-15 02:05:01.012738] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.493 [2024-04-15 02:05:01.012781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-04-15 02:05:01.022584] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.493 [2024-04-15 02:05:01.022796] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.493 [2024-04-15 02:05:01.022821] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.493 [2024-04-15 02:05:01.022835] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.493 [2024-04-15 02:05:01.022849] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.493 [2024-04-15 02:05:01.022879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.493 qpair failed and we were unable to recover it. 
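The two negative return codes in each cycle are plain errno values: rc -5 from nvme_fabric_qpair_connect_poll() is -EIO, and the "CQ transport error -6" reported by spdk_nvme_qpair_process_completions() is -ENXIO, whose strerror() text is exactly the "No such device or address" shown in the log. A quick check, assuming a Linux/glibc errno table:

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>

    int main(void)
    {
        /* EIO == 5 and ENXIO == 6 on Linux, so the log's "rc -5" and
         * "transport error -6" are -EIO and -ENXIO respectively. */
        printf("rc -5 -> %s\n", strerror(EIO));
        printf("rc -6 -> %s\n", strerror(ENXIO));
        return 0;
    }

Expected output: "Input/output error" for -5 and "No such device or address" for -6, matching the parenthetical in each nvme_qpair.c record.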
00:30:15.493 [2024-04-15 02:05:01.032621] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.493 [2024-04-15 02:05:01.032846] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.493 [2024-04-15 02:05:01.032874] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.493 [2024-04-15 02:05:01.032889] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.493 [2024-04-15 02:05:01.032903] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.493 [2024-04-15 02:05:01.032932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-04-15 02:05:01.042607] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.493 [2024-04-15 02:05:01.042842] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.493 [2024-04-15 02:05:01.042875] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.493 [2024-04-15 02:05:01.042892] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.493 [2024-04-15 02:05:01.042906] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.493 [2024-04-15 02:05:01.042935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-04-15 02:05:01.052635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.493 [2024-04-15 02:05:01.052868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.493 [2024-04-15 02:05:01.052896] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.493 [2024-04-15 02:05:01.052912] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.493 [2024-04-15 02:05:01.052926] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.493 [2024-04-15 02:05:01.052958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.493 qpair failed and we were unable to recover it. 
00:30:15.493 [2024-04-15 02:05:01.062712] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.493 [2024-04-15 02:05:01.062911] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.493 [2024-04-15 02:05:01.062936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.493 [2024-04-15 02:05:01.062950] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.493 [2024-04-15 02:05:01.062962] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.493 [2024-04-15 02:05:01.062993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-04-15 02:05:01.072685] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.493 [2024-04-15 02:05:01.072877] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.493 [2024-04-15 02:05:01.072902] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.493 [2024-04-15 02:05:01.072916] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.493 [2024-04-15 02:05:01.072929] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.493 [2024-04-15 02:05:01.072959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-04-15 02:05:01.082699] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.493 [2024-04-15 02:05:01.082893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.493 [2024-04-15 02:05:01.082918] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.493 [2024-04-15 02:05:01.082933] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.493 [2024-04-15 02:05:01.082945] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.493 [2024-04-15 02:05:01.082974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.493 qpair failed and we were unable to recover it. 
00:30:15.493 [2024-04-15 02:05:01.092765] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.493 [2024-04-15 02:05:01.092962] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.493 [2024-04-15 02:05:01.092987] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.493 [2024-04-15 02:05:01.093001] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.493 [2024-04-15 02:05:01.093014] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.493 [2024-04-15 02:05:01.093044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.493 qpair failed and we were unable to recover it. 00:30:15.493 [2024-04-15 02:05:01.102780] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.493 [2024-04-15 02:05:01.102976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.493 [2024-04-15 02:05:01.103001] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.494 [2024-04-15 02:05:01.103016] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.494 [2024-04-15 02:05:01.103029] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.494 [2024-04-15 02:05:01.103067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-04-15 02:05:01.112811] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.494 [2024-04-15 02:05:01.113023] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.494 [2024-04-15 02:05:01.113055] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.494 [2024-04-15 02:05:01.113072] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.494 [2024-04-15 02:05:01.113085] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.494 [2024-04-15 02:05:01.113114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.494 qpair failed and we were unable to recover it. 
00:30:15.494 [2024-04-15 02:05:01.122908] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.494 [2024-04-15 02:05:01.123107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.494 [2024-04-15 02:05:01.123132] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.494 [2024-04-15 02:05:01.123146] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.494 [2024-04-15 02:05:01.123159] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.494 [2024-04-15 02:05:01.123188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.494 [2024-04-15 02:05:01.132866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.494 [2024-04-15 02:05:01.133111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.494 [2024-04-15 02:05:01.133143] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.494 [2024-04-15 02:05:01.133159] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.494 [2024-04-15 02:05:01.133173] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.494 [2024-04-15 02:05:01.133203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.494 qpair failed and we were unable to recover it. 00:30:15.753 [2024-04-15 02:05:01.142948] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.753 [2024-04-15 02:05:01.143147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.753 [2024-04-15 02:05:01.143172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.753 [2024-04-15 02:05:01.143186] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.753 [2024-04-15 02:05:01.143199] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.753 [2024-04-15 02:05:01.143228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.753 qpair failed and we were unable to recover it. 
00:30:15.753 [2024-04-15 02:05:01.152964] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.753 [2024-04-15 02:05:01.153175] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.753 [2024-04-15 02:05:01.153201] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.753 [2024-04-15 02:05:01.153215] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.753 [2024-04-15 02:05:01.153228] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.753 [2024-04-15 02:05:01.153257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.753 qpair failed and we were unable to recover it. 00:30:15.753 [2024-04-15 02:05:01.162979] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.753 [2024-04-15 02:05:01.163171] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.753 [2024-04-15 02:05:01.163196] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.753 [2024-04-15 02:05:01.163211] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.753 [2024-04-15 02:05:01.163225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.753 [2024-04-15 02:05:01.163254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.753 qpair failed and we were unable to recover it. 00:30:15.753 [2024-04-15 02:05:01.173018] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.753 [2024-04-15 02:05:01.173252] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.753 [2024-04-15 02:05:01.173280] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.753 [2024-04-15 02:05:01.173296] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.753 [2024-04-15 02:05:01.173309] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.753 [2024-04-15 02:05:01.173359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.753 qpair failed and we were unable to recover it. 
00:30:15.753 [2024-04-15 02:05:01.183075] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.753 [2024-04-15 02:05:01.183274] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.753 [2024-04-15 02:05:01.183299] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.753 [2024-04-15 02:05:01.183313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.753 [2024-04-15 02:05:01.183326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.753 [2024-04-15 02:05:01.183357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.753 qpair failed and we were unable to recover it. 00:30:15.753 [2024-04-15 02:05:01.193108] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.753 [2024-04-15 02:05:01.193313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.753 [2024-04-15 02:05:01.193338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.753 [2024-04-15 02:05:01.193352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.753 [2024-04-15 02:05:01.193365] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.753 [2024-04-15 02:05:01.193395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.753 qpair failed and we were unable to recover it. 00:30:15.753 [2024-04-15 02:05:01.203143] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.753 [2024-04-15 02:05:01.203344] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.753 [2024-04-15 02:05:01.203368] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.753 [2024-04-15 02:05:01.203382] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.753 [2024-04-15 02:05:01.203395] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.753 [2024-04-15 02:05:01.203424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.753 qpair failed and we were unable to recover it. 
00:30:15.753 [2024-04-15 02:05:01.213183] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.753 [2024-04-15 02:05:01.213382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.753 [2024-04-15 02:05:01.213407] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.753 [2024-04-15 02:05:01.213421] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.753 [2024-04-15 02:05:01.213435] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.753 [2024-04-15 02:05:01.213466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.753 qpair failed and we were unable to recover it. 00:30:15.753 [2024-04-15 02:05:01.223234] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.753 [2024-04-15 02:05:01.223462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.753 [2024-04-15 02:05:01.223495] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.753 [2024-04-15 02:05:01.223511] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.753 [2024-04-15 02:05:01.223524] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.753 [2024-04-15 02:05:01.223554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.753 qpair failed and we were unable to recover it. 00:30:15.753 [2024-04-15 02:05:01.233191] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.753 [2024-04-15 02:05:01.233390] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.753 [2024-04-15 02:05:01.233415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.754 [2024-04-15 02:05:01.233429] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.754 [2024-04-15 02:05:01.233441] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.754 [2024-04-15 02:05:01.233470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.754 qpair failed and we were unable to recover it. 
00:30:15.754 [2024-04-15 02:05:01.243251] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.754 [2024-04-15 02:05:01.243447] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.754 [2024-04-15 02:05:01.243472] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.754 [2024-04-15 02:05:01.243486] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.754 [2024-04-15 02:05:01.243498] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.754 [2024-04-15 02:05:01.243528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.754 qpair failed and we were unable to recover it. 00:30:15.754 [2024-04-15 02:05:01.253266] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.754 [2024-04-15 02:05:01.253468] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.754 [2024-04-15 02:05:01.253493] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.754 [2024-04-15 02:05:01.253507] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.754 [2024-04-15 02:05:01.253520] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.754 [2024-04-15 02:05:01.253550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.754 qpair failed and we were unable to recover it. 00:30:15.754 [2024-04-15 02:05:01.263314] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.754 [2024-04-15 02:05:01.263557] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.754 [2024-04-15 02:05:01.263585] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.754 [2024-04-15 02:05:01.263600] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.754 [2024-04-15 02:05:01.263619] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.754 [2024-04-15 02:05:01.263649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.754 qpair failed and we were unable to recover it. 
00:30:15.754 [2024-04-15 02:05:01.273384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.754 [2024-04-15 02:05:01.273637] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.754 [2024-04-15 02:05:01.273665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.754 [2024-04-15 02:05:01.273681] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.754 [2024-04-15 02:05:01.273694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.754 [2024-04-15 02:05:01.273738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.754 qpair failed and we were unable to recover it. 00:30:15.754 [2024-04-15 02:05:01.283352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.754 [2024-04-15 02:05:01.283551] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.754 [2024-04-15 02:05:01.283577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.754 [2024-04-15 02:05:01.283591] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.754 [2024-04-15 02:05:01.283604] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.754 [2024-04-15 02:05:01.283634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.754 qpair failed and we were unable to recover it. 00:30:15.754 [2024-04-15 02:05:01.293363] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.754 [2024-04-15 02:05:01.293563] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.754 [2024-04-15 02:05:01.293589] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.754 [2024-04-15 02:05:01.293604] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.754 [2024-04-15 02:05:01.293618] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.754 [2024-04-15 02:05:01.293648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.754 qpair failed and we were unable to recover it. 
00:30:15.754 [2024-04-15 02:05:01.303393] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.754 [2024-04-15 02:05:01.303638] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.754 [2024-04-15 02:05:01.303665] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.754 [2024-04-15 02:05:01.303681] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.754 [2024-04-15 02:05:01.303694] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.754 [2024-04-15 02:05:01.303724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.754 qpair failed and we were unable to recover it. 00:30:15.754 [2024-04-15 02:05:01.313443] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.754 [2024-04-15 02:05:01.313652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.754 [2024-04-15 02:05:01.313678] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.754 [2024-04-15 02:05:01.313692] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.754 [2024-04-15 02:05:01.313706] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.754 [2024-04-15 02:05:01.313747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.754 qpair failed and we were unable to recover it. 00:30:15.754 [2024-04-15 02:05:01.323459] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.754 [2024-04-15 02:05:01.323658] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.754 [2024-04-15 02:05:01.323682] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.754 [2024-04-15 02:05:01.323696] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.754 [2024-04-15 02:05:01.323709] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.754 [2024-04-15 02:05:01.323739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.754 qpair failed and we were unable to recover it. 
00:30:15.754 [2024-04-15 02:05:01.333644] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.754 [2024-04-15 02:05:01.333865] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.754 [2024-04-15 02:05:01.333890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.754 [2024-04-15 02:05:01.333904] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.754 [2024-04-15 02:05:01.333917] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.754 [2024-04-15 02:05:01.333960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.754 qpair failed and we were unable to recover it. 00:30:15.754 [2024-04-15 02:05:01.343669] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.754 [2024-04-15 02:05:01.343870] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.754 [2024-04-15 02:05:01.343895] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.754 [2024-04-15 02:05:01.343909] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.754 [2024-04-15 02:05:01.343922] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.754 [2024-04-15 02:05:01.343964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.754 qpair failed and we were unable to recover it. 00:30:15.754 [2024-04-15 02:05:01.353568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.754 [2024-04-15 02:05:01.353800] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.754 [2024-04-15 02:05:01.353828] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.754 [2024-04-15 02:05:01.353843] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.754 [2024-04-15 02:05:01.353862] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.754 [2024-04-15 02:05:01.353904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.754 qpair failed and we were unable to recover it. 
00:30:15.754 [2024-04-15 02:05:01.363629] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.754 [2024-04-15 02:05:01.363839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.754 [2024-04-15 02:05:01.363865] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.754 [2024-04-15 02:05:01.363880] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.754 [2024-04-15 02:05:01.363893] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.755 [2024-04-15 02:05:01.363922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.755 qpair failed and we were unable to recover it. 00:30:15.755 [2024-04-15 02:05:01.373630] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.755 [2024-04-15 02:05:01.373828] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.755 [2024-04-15 02:05:01.373853] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.755 [2024-04-15 02:05:01.373867] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.755 [2024-04-15 02:05:01.373880] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.755 [2024-04-15 02:05:01.373910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.755 qpair failed and we were unable to recover it. 00:30:15.755 [2024-04-15 02:05:01.383675] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.755 [2024-04-15 02:05:01.383883] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.755 [2024-04-15 02:05:01.383908] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.755 [2024-04-15 02:05:01.383923] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.755 [2024-04-15 02:05:01.383940] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.755 [2024-04-15 02:05:01.383970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.755 qpair failed and we were unable to recover it. 
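Each retry also prints the full transport tuple it is dialing (trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1). In SPDK's public API that tuple is a struct spdk_nvme_transport_id; a sketch that fills one with exactly the values logged above (the builder function itself is illustrative):

#include <stdio.h>
#include <string.h>
#include "spdk/nvme.h"

static void
build_trid(struct spdk_nvme_transport_id *trid)
{
        memset(trid, 0, sizeof(*trid));
        trid->trtype = SPDK_NVME_TRANSPORT_TCP;
        trid->adrfam = SPDK_NVMF_ADRFAM_IPV4;
        snprintf(trid->traddr, sizeof(trid->traddr), "10.0.0.2");
        snprintf(trid->trsvcid, sizeof(trid->trsvcid), "4420");
        snprintf(trid->subnqn, sizeof(trid->subnqn),
                 "nqn.2016-06.io.spdk:cnode1");
}

Handing this to spdk_nvme_connect() dials the same endpoint these entries show failing: the address resolves and the TCP connection comes up (the target is clearly answering, since it is the one logging the rejection); it is only the Fabrics CONNECT layered on top that is refused.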
00:30:15.755 [2024-04-15 02:05:01.393660] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:15.755 [2024-04-15 02:05:01.393875] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:15.755 [2024-04-15 02:05:01.393900] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:15.755 [2024-04-15 02:05:01.393914] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:15.755 [2024-04-15 02:05:01.393927] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:15.755 [2024-04-15 02:05:01.393957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:15.755 qpair failed and we were unable to recover it. 00:30:16.014 [2024-04-15 02:05:01.403709] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.014 [2024-04-15 02:05:01.403909] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.014 [2024-04-15 02:05:01.403935] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.014 [2024-04-15 02:05:01.403949] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.014 [2024-04-15 02:05:01.403962] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.014 [2024-04-15 02:05:01.403992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.014 qpair failed and we were unable to recover it. 00:30:16.014 [2024-04-15 02:05:01.413759] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.014 [2024-04-15 02:05:01.413955] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.014 [2024-04-15 02:05:01.413980] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.014 [2024-04-15 02:05:01.413995] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.014 [2024-04-15 02:05:01.414008] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.014 [2024-04-15 02:05:01.414037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.014 qpair failed and we were unable to recover it. 
00:30:16.014 [2024-04-15 02:05:01.423817] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.014 [2024-04-15 02:05:01.424054] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.014 [2024-04-15 02:05:01.424083] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.014 [2024-04-15 02:05:01.424100] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.014 [2024-04-15 02:05:01.424113] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.014 [2024-04-15 02:05:01.424144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.014 qpair failed and we were unable to recover it. 00:30:16.014 [2024-04-15 02:05:01.433779] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.014 [2024-04-15 02:05:01.433978] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.014 [2024-04-15 02:05:01.434004] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.014 [2024-04-15 02:05:01.434019] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.014 [2024-04-15 02:05:01.434032] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.014 [2024-04-15 02:05:01.434069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.014 qpair failed and we were unable to recover it. 00:30:16.014 [2024-04-15 02:05:01.443866] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.014 [2024-04-15 02:05:01.444093] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.014 [2024-04-15 02:05:01.444119] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.014 [2024-04-15 02:05:01.444139] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.014 [2024-04-15 02:05:01.444152] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.014 [2024-04-15 02:05:01.444182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.014 qpair failed and we were unable to recover it. 
00:30:16.014 [2024-04-15 02:05:01.453850] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.014 [2024-04-15 02:05:01.454052] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.014 [2024-04-15 02:05:01.454079] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.014 [2024-04-15 02:05:01.454094] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.014 [2024-04-15 02:05:01.454106] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.014 [2024-04-15 02:05:01.454138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.014 qpair failed and we were unable to recover it. 00:30:16.014 [2024-04-15 02:05:01.463925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.014 [2024-04-15 02:05:01.464173] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.014 [2024-04-15 02:05:01.464201] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.014 [2024-04-15 02:05:01.464217] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.014 [2024-04-15 02:05:01.464230] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.014 [2024-04-15 02:05:01.464261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.014 qpair failed and we were unable to recover it. 00:30:16.014 [2024-04-15 02:05:01.473994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.014 [2024-04-15 02:05:01.474244] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.014 [2024-04-15 02:05:01.474272] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.014 [2024-04-15 02:05:01.474288] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.014 [2024-04-15 02:05:01.474301] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.014 [2024-04-15 02:05:01.474331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.014 qpair failed and we were unable to recover it. 
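The "sct 1, sc 130" pair in each rejection is the NVMe status from the CONNECT completion: status code type 1 is Command Specific, and 130 is 0x82, the code a Fabrics CONNECT uses for invalid parameters such as a stale controller ID. A sketch of decoding that from a completion using the bitfields in spdk/nvme_spec.h (the predicate is illustrative):

#include <stdbool.h>
#include "spdk/nvme_spec.h"

static bool
cpl_is_connect_param_reject(const struct spdk_nvme_cpl *cpl)
{
        /* "sct 1, sc 130": SPDK_NVME_SCT_COMMAND_SPECIFIC with code 0x82. */
        return cpl->status.sct == SPDK_NVME_SCT_COMMAND_SPECIFIC &&
               cpl->status.sc == 0x82;
}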
00:30:16.014 [2024-04-15 02:05:01.483999] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.014 [2024-04-15 02:05:01.484199] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.014 [2024-04-15 02:05:01.484224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.014 [2024-04-15 02:05:01.484239] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.014 [2024-04-15 02:05:01.484252] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.015 [2024-04-15 02:05:01.484281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.015 qpair failed and we were unable to recover it. 00:30:16.015 [2024-04-15 02:05:01.494034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.015 [2024-04-15 02:05:01.494241] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.015 [2024-04-15 02:05:01.494268] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.015 [2024-04-15 02:05:01.494284] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.015 [2024-04-15 02:05:01.494297] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.015 [2024-04-15 02:05:01.494327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.015 qpair failed and we were unable to recover it. 00:30:16.015 [2024-04-15 02:05:01.504027] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.015 [2024-04-15 02:05:01.504278] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.015 [2024-04-15 02:05:01.504306] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.015 [2024-04-15 02:05:01.504321] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.015 [2024-04-15 02:05:01.504334] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.015 [2024-04-15 02:05:01.504364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.015 qpair failed and we were unable to recover it. 
00:30:16.015 [2024-04-15 02:05:01.514021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.015 [2024-04-15 02:05:01.514218] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.015 [2024-04-15 02:05:01.514243] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.015 [2024-04-15 02:05:01.514258] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.015 [2024-04-15 02:05:01.514271] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.015 [2024-04-15 02:05:01.514301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.015 qpair failed and we were unable to recover it. 00:30:16.015 [2024-04-15 02:05:01.524078] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.015 [2024-04-15 02:05:01.524273] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.015 [2024-04-15 02:05:01.524298] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.015 [2024-04-15 02:05:01.524313] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.015 [2024-04-15 02:05:01.524326] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.015 [2024-04-15 02:05:01.524356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.015 qpair failed and we were unable to recover it. 00:30:16.015 [2024-04-15 02:05:01.534073] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.015 [2024-04-15 02:05:01.534313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.015 [2024-04-15 02:05:01.534340] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.015 [2024-04-15 02:05:01.534362] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.015 [2024-04-15 02:05:01.534375] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.015 [2024-04-15 02:05:01.534405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.015 qpair failed and we were unable to recover it. 
00:30:16.015 [2024-04-15 02:05:01.544098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.015 [2024-04-15 02:05:01.544311] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.015 [2024-04-15 02:05:01.544338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.015 [2024-04-15 02:05:01.544352] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.015 [2024-04-15 02:05:01.544364] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.015 [2024-04-15 02:05:01.544394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.015 qpair failed and we were unable to recover it. 00:30:16.015 [2024-04-15 02:05:01.554135] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.015 [2024-04-15 02:05:01.554403] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.015 [2024-04-15 02:05:01.554431] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.015 [2024-04-15 02:05:01.554446] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.015 [2024-04-15 02:05:01.554473] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.015 [2024-04-15 02:05:01.554503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.015 qpair failed and we were unable to recover it. 00:30:16.015 [2024-04-15 02:05:01.564159] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.015 [2024-04-15 02:05:01.564352] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.015 [2024-04-15 02:05:01.564377] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.015 [2024-04-15 02:05:01.564391] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.015 [2024-04-15 02:05:01.564403] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.015 [2024-04-15 02:05:01.564433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.015 qpair failed and we were unable to recover it. 
00:30:16.015 [2024-04-15 02:05:01.574277] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.015 [2024-04-15 02:05:01.574473] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.015 [2024-04-15 02:05:01.574497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.015 [2024-04-15 02:05:01.574511] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.015 [2024-04-15 02:05:01.574525] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.015 [2024-04-15 02:05:01.574555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.015 qpair failed and we were unable to recover it. 00:30:16.015 [2024-04-15 02:05:01.584247] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.015 [2024-04-15 02:05:01.584448] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.015 [2024-04-15 02:05:01.584474] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.015 [2024-04-15 02:05:01.584488] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.015 [2024-04-15 02:05:01.584501] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.015 [2024-04-15 02:05:01.584529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.015 qpair failed and we were unable to recover it. 00:30:16.015 [2024-04-15 02:05:01.594301] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.015 [2024-04-15 02:05:01.594509] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.015 [2024-04-15 02:05:01.594536] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.015 [2024-04-15 02:05:01.594555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.015 [2024-04-15 02:05:01.594568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.015 [2024-04-15 02:05:01.594598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.015 qpair failed and we were unable to recover it. 
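On the host side, each failure surfaces through the very call the log names: spdk_nvme_qpair_process_completions() returns a negated errno once the transport marks the qpair failed, which is the "CQ transport error -6 (No such device or address)" line. A hedged sketch of a poller seeing that (the wrapper is illustrative; the SPDK call and its -ENXIO convention are real):

#include <errno.h>
#include <stdio.h>
#include "spdk/nvme.h"

static int32_t
poll_io_qpair(struct spdk_nvme_qpair *qpair)
{
        /* max_completions == 0 means no limit; the return value is the
         * number of completions reaped, or a negated errno on failure. */
        int32_t rc = spdk_nvme_qpair_process_completions(qpair, 0);

        if (rc == -ENXIO) {
                /* -6: the connection is gone and the qpair cannot make
                 * progress; escalate to a reconnect or a controller reset. */
                fprintf(stderr, "qpair failed; escalating\n");
        }
        return rc;
}

The retries above land roughly every 10 ms, so the harness burns through dozens of attempts before the admin queue's keep-alive finally trips and forces a heavier recovery.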
00:30:16.015 [2024-04-15 02:05:01.604311] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.015 [2024-04-15 02:05:01.604532] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.015 [2024-04-15 02:05:01.604560] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.015 [2024-04-15 02:05:01.604575] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.015 [2024-04-15 02:05:01.604588] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.015 [2024-04-15 02:05:01.604618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.015 qpair failed and we were unable to recover it. 00:30:16.015 [2024-04-15 02:05:01.614309] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.015 [2024-04-15 02:05:01.614513] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.015 [2024-04-15 02:05:01.614540] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.015 [2024-04-15 02:05:01.614555] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.015 [2024-04-15 02:05:01.614568] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.016 [2024-04-15 02:05:01.614598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.016 qpair failed and we were unable to recover it. 00:30:16.016 [2024-04-15 02:05:01.624402] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.016 [2024-04-15 02:05:01.624629] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.016 [2024-04-15 02:05:01.624663] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.016 [2024-04-15 02:05:01.624679] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.016 [2024-04-15 02:05:01.624692] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.016 [2024-04-15 02:05:01.624722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.016 qpair failed and we were unable to recover it. 
00:30:16.016 [2024-04-15 02:05:01.634421] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.016 [2024-04-15 02:05:01.634639] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.016 [2024-04-15 02:05:01.634666] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.016 [2024-04-15 02:05:01.634681] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.016 [2024-04-15 02:05:01.634693] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.016 [2024-04-15 02:05:01.634723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.016 qpair failed and we were unable to recover it. 00:30:16.016 [2024-04-15 02:05:01.644415] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.016 [2024-04-15 02:05:01.644611] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.016 [2024-04-15 02:05:01.644638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.016 [2024-04-15 02:05:01.644653] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.016 [2024-04-15 02:05:01.644666] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.016 [2024-04-15 02:05:01.644696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.016 qpair failed and we were unable to recover it. 00:30:16.016 [2024-04-15 02:05:01.654440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.016 [2024-04-15 02:05:01.654683] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.016 [2024-04-15 02:05:01.654711] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.016 [2024-04-15 02:05:01.654726] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.016 [2024-04-15 02:05:01.654739] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.016 [2024-04-15 02:05:01.654769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.016 qpair failed and we were unable to recover it. 
00:30:16.275 [2024-04-15 02:05:01.664605] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.275 [2024-04-15 02:05:01.664836] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.275 [2024-04-15 02:05:01.664863] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.275 [2024-04-15 02:05:01.664878] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.275 [2024-04-15 02:05:01.664891] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.275 [2024-04-15 02:05:01.664938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-04-15 02:05:01.674491] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.275 [2024-04-15 02:05:01.674737] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.275 [2024-04-15 02:05:01.674765] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.275 [2024-04-15 02:05:01.674781] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.275 [2024-04-15 02:05:01.674794] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.275 [2024-04-15 02:05:01.674839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-04-15 02:05:01.684530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.275 [2024-04-15 02:05:01.684728] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.275 [2024-04-15 02:05:01.684756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.275 [2024-04-15 02:05:01.684771] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.275 [2024-04-15 02:05:01.684783] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.275 [2024-04-15 02:05:01.684813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.275 qpair failed and we were unable to recover it. 
00:30:16.275 [2024-04-15 02:05:01.694586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.275 [2024-04-15 02:05:01.694780] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.275 [2024-04-15 02:05:01.694805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.275 [2024-04-15 02:05:01.694820] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.275 [2024-04-15 02:05:01.694834] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.275 [2024-04-15 02:05:01.694864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-04-15 02:05:01.704586] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.275 [2024-04-15 02:05:01.704784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.275 [2024-04-15 02:05:01.704809] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.275 [2024-04-15 02:05:01.704823] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.275 [2024-04-15 02:05:01.704836] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.275 [2024-04-15 02:05:01.704866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-04-15 02:05:01.714710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.275 [2024-04-15 02:05:01.714912] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.275 [2024-04-15 02:05:01.714942] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.275 [2024-04-15 02:05:01.714958] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.275 [2024-04-15 02:05:01.714971] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.275 [2024-04-15 02:05:01.715001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.275 qpair failed and we were unable to recover it. 
00:30:16.275 [2024-04-15 02:05:01.724624] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.275 [2024-04-15 02:05:01.724822] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.275 [2024-04-15 02:05:01.724847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.275 [2024-04-15 02:05:01.724861] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.275 [2024-04-15 02:05:01.724874] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.275 [2024-04-15 02:05:01.724903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-04-15 02:05:01.734729] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.275 [2024-04-15 02:05:01.734963] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.275 [2024-04-15 02:05:01.734988] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.275 [2024-04-15 02:05:01.735002] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.275 [2024-04-15 02:05:01.735015] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.275 [2024-04-15 02:05:01.735051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-04-15 02:05:01.744734] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.275 [2024-04-15 02:05:01.744936] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.275 [2024-04-15 02:05:01.744961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.275 [2024-04-15 02:05:01.744975] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.275 [2024-04-15 02:05:01.744988] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.275 [2024-04-15 02:05:01.745017] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.275 qpair failed and we were unable to recover it. 
00:30:16.275 [2024-04-15 02:05:01.754778] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.275 [2024-04-15 02:05:01.754975] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.275 [2024-04-15 02:05:01.755002] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.275 [2024-04-15 02:05:01.755017] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.275 [2024-04-15 02:05:01.755035] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.275 [2024-04-15 02:05:01.755073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-04-15 02:05:01.764771] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.275 [2024-04-15 02:05:01.764968] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.275 [2024-04-15 02:05:01.764994] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.275 [2024-04-15 02:05:01.765009] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.275 [2024-04-15 02:05:01.765022] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f50fc000b90 00:30:16.275 [2024-04-15 02:05:01.765058] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:16.275 qpair failed and we were unable to recover it. 00:30:16.275 [2024-04-15 02:05:01.774815] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.275 [2024-04-15 02:05:01.775061] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.275 [2024-04-15 02:05:01.775093] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.275 [2024-04-15 02:05:01.775110] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.275 [2024-04-15 02:05:01.775123] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15e1610 00:30:16.276 [2024-04-15 02:05:01.775153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.276 qpair failed and we were unable to recover it. 
00:30:16.276 [2024-04-15 02:05:01.784925] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:16.276 [2024-04-15 02:05:01.785184] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:16.276 [2024-04-15 02:05:01.785212] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:16.276 [2024-04-15 02:05:01.785227] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:16.276 [2024-04-15 02:05:01.785239] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x15e1610 00:30:16.276 [2024-04-15 02:05:01.785268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:16.276 qpair failed and we were unable to recover it. 00:30:16.276 [2024-04-15 02:05:01.785396] nvme_ctrlr.c:4325:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:30:16.276 A controller has encountered a failure and is being reset. 00:30:16.276 Controller properly reset. 00:30:16.276 Initializing NVMe Controllers 00:30:16.276 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:16.276 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:16.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:16.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:16.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:16.276 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:16.276 Initialization complete. Launching workers. 
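The last two entries fail on a different qpair (tqpair=0x15e1610, qpair id 3), and once the Keep Alive submission on the admin queue fails as well, the host stops retrying individual queues: the controller is reset, re-attached to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, and its queues are re-associated with lcores 0-3. In terms of public SPDK API that recovery path looks roughly like this (error handling illustrative, handles assumed to already exist):

#include <stdio.h>
#include "spdk/nvme.h"

static int
recover_ctrlr(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *io_qpair)
{
        /* Tear the controller down and re-enable it over the fabric. */
        int rc = spdk_nvme_ctrlr_reset(ctrlr);

        if (rc != 0) {
                fprintf(stderr, "controller reset failed: %d\n", rc);
                return rc;
        }
        /* I/O qpairs allocated before the reset remain valid handles but
         * must be reconnected before they can carry traffic again. */
        rc = spdk_nvme_ctrlr_reconnect_io_qpair(io_qpair);
        if (rc != 0) {
                fprintf(stderr, "qpair reconnect failed: %d\n", rc);
        }
        return rc;
}

With the controller "properly reset", the tc2 case below completes (11.4 s wall time for this sub-test) and the harness proceeds to nvmftestfini, which unloads the nvme-tcp, nvme-fabrics and nvme-keyring modules and kills the target process.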
00:30:16.276 Starting thread on core 1 00:30:16.276 Starting thread on core 2 00:30:16.276 Starting thread on core 3 00:30:16.276 Starting thread on core 0 00:30:16.276 02:05:01 -- host/target_disconnect.sh@59 -- # sync 00:30:16.276 00:30:16.276 real 0m11.448s 00:30:16.276 user 0m19.311s 00:30:16.276 sys 0m5.547s 00:30:16.276 02:05:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:16.276 02:05:01 -- common/autotest_common.sh@10 -- # set +x 00:30:16.276 ************************************ 00:30:16.276 END TEST nvmf_target_disconnect_tc2 00:30:16.276 ************************************ 00:30:16.276 02:05:01 -- host/target_disconnect.sh@80 -- # '[' -n '' ']' 00:30:16.276 02:05:01 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:30:16.276 02:05:01 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:30:16.276 02:05:01 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:16.276 02:05:01 -- nvmf/common.sh@116 -- # sync 00:30:16.276 02:05:01 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:16.276 02:05:01 -- nvmf/common.sh@119 -- # set +e 00:30:16.276 02:05:01 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:16.276 02:05:01 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:16.276 rmmod nvme_tcp 00:30:16.276 rmmod nvme_fabrics 00:30:16.276 rmmod nvme_keyring 00:30:16.276 02:05:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:16.276 02:05:01 -- nvmf/common.sh@123 -- # set -e 00:30:16.276 02:05:01 -- nvmf/common.sh@124 -- # return 0 00:30:16.276 02:05:01 -- nvmf/common.sh@477 -- # '[' -n 2288701 ']' 00:30:16.276 02:05:01 -- nvmf/common.sh@478 -- # killprocess 2288701 00:30:16.276 02:05:01 -- common/autotest_common.sh@926 -- # '[' -z 2288701 ']' 00:30:16.276 02:05:01 -- common/autotest_common.sh@930 -- # kill -0 2288701 00:30:16.276 02:05:01 -- common/autotest_common.sh@931 -- # uname 00:30:16.276 02:05:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:16.534 02:05:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2288701 00:30:16.534 02:05:01 -- common/autotest_common.sh@932 -- # process_name=reactor_4 00:30:16.534 02:05:01 -- common/autotest_common.sh@936 -- # '[' reactor_4 = sudo ']' 00:30:16.534 02:05:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2288701' 00:30:16.534 killing process with pid 2288701 00:30:16.534 02:05:01 -- common/autotest_common.sh@945 -- # kill 2288701 00:30:16.534 02:05:01 -- common/autotest_common.sh@950 -- # wait 2288701 00:30:16.534 02:05:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:16.534 02:05:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:16.534 02:05:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:16.534 02:05:02 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:16.534 02:05:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:16.534 02:05:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.534 02:05:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:16.534 02:05:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:19.068 02:05:04 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:19.068 00:30:19.068 real 0m16.019s 00:30:19.068 user 0m45.212s 00:30:19.068 sys 0m7.516s 00:30:19.068 02:05:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:19.068 02:05:04 -- common/autotest_common.sh@10 -- # set +x 00:30:19.068 ************************************ 00:30:19.068 END TEST nvmf_target_disconnect 00:30:19.068 
************************************ 00:30:19.068 02:05:04 -- nvmf/nvmf.sh@126 -- # timing_exit host 00:30:19.068 02:05:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:19.068 02:05:04 -- common/autotest_common.sh@10 -- # set +x 00:30:19.068 02:05:04 -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:30:19.068 00:30:19.068 real 22m17.533s 00:30:19.068 user 63m15.617s 00:30:19.068 sys 5m21.797s 00:30:19.068 02:05:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:19.068 02:05:04 -- common/autotest_common.sh@10 -- # set +x 00:30:19.068 ************************************ 00:30:19.068 END TEST nvmf_tcp 00:30:19.068 ************************************ 00:30:19.068 02:05:04 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]] 00:30:19.068 02:05:04 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:19.068 02:05:04 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:19.068 02:05:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:19.068 02:05:04 -- common/autotest_common.sh@10 -- # set +x 00:30:19.068 ************************************ 00:30:19.068 START TEST spdkcli_nvmf_tcp 00:30:19.068 ************************************ 00:30:19.068 02:05:04 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:30:19.068 * Looking for test storage... 00:30:19.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:30:19.068 02:05:04 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:30:19.068 02:05:04 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:30:19.068 02:05:04 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:30:19.068 02:05:04 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:19.068 02:05:04 -- nvmf/common.sh@7 -- # uname -s 00:30:19.068 02:05:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:19.068 02:05:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:19.068 02:05:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:19.068 02:05:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:19.068 02:05:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:19.068 02:05:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:19.068 02:05:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:19.068 02:05:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:19.068 02:05:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:19.068 02:05:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:19.068 02:05:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:19.068 02:05:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:19.068 02:05:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:19.068 02:05:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:19.068 02:05:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:19.068 02:05:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:19.068 02:05:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh 
]] 00:30:19.068 02:05:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:19.068 02:05:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:19.068 02:05:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.068 02:05:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.068 02:05:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.068 02:05:04 -- paths/export.sh@5 -- # export PATH 00:30:19.068 02:05:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:19.068 02:05:04 -- nvmf/common.sh@46 -- # : 0 00:30:19.068 02:05:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:19.068 02:05:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:19.068 02:05:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:19.068 02:05:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:19.068 02:05:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:19.068 02:05:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:19.068 02:05:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:19.068 02:05:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:19.068 02:05:04 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:19.068 02:05:04 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:19.068 02:05:04 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:19.068 02:05:04 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:19.068 02:05:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:19.068 02:05:04 -- common/autotest_common.sh@10 -- # set +x 00:30:19.068 02:05:04 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:19.068 02:05:04 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2289932 00:30:19.068 02:05:04 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:19.068 02:05:04 -- spdkcli/common.sh@34 -- # waitforlisten 2289932 00:30:19.069 02:05:04 -- common/autotest_common.sh@819 -- # '[' -z 2289932 ']' 00:30:19.069 02:05:04 -- common/autotest_common.sh@823 
-- # local rpc_addr=/var/tmp/spdk.sock 00:30:19.069 02:05:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:19.069 02:05:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:19.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:19.069 02:05:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:19.069 02:05:04 -- common/autotest_common.sh@10 -- # set +x 00:30:19.069 [2024-04-15 02:05:04.377956] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:30:19.069 [2024-04-15 02:05:04.378056] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2289932 ] 00:30:19.069 EAL: No free 2048 kB hugepages reported on node 1 00:30:19.069 [2024-04-15 02:05:04.434672] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:19.069 [2024-04-15 02:05:04.517145] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:19.069 [2024-04-15 02:05:04.517352] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:19.069 [2024-04-15 02:05:04.517357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:20.000 02:05:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:20.000 02:05:05 -- common/autotest_common.sh@852 -- # return 0 00:30:20.000 02:05:05 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:20.000 02:05:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:20.000 02:05:05 -- common/autotest_common.sh@10 -- # set +x 00:30:20.000 02:05:05 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:20.000 02:05:05 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:20.000 02:05:05 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:20.000 02:05:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:20.000 02:05:05 -- common/autotest_common.sh@10 -- # set +x 00:30:20.000 02:05:05 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:20.000 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:20.000 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:20.000 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:20.000 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:20.000 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:20.001 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:20.001 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:20.001 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:20.001 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:20.001 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:20.001 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:30:20.001 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:20.001 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:20.001 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:20.001 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:20.001 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:20.001 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:20.001 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:20.001 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:20.001 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:20.001 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:20.001 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:20.001 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:20.001 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:20.001 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:20.001 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:20.001 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:20.001 ' 00:30:20.258 [2024-04-15 02:05:05.733710] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:22.784 [2024-04-15 02:05:07.886474] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:23.723 [2024-04-15 02:05:09.114929] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:26.252 [2024-04-15 02:05:11.402335] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:28.149 [2024-04-15 02:05:13.380715] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:29.522 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:29.522 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:29.522 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:29.522 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:29.522 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:29.522 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:29.522 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:29.522 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW 
max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:29.522 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:29.522 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.522 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:29.523 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:29.523 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:29.523 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:29.523 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:29.523 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:29.523 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:29.523 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:29.523 02:05:14 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:29.523 02:05:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:29.523 02:05:14 -- common/autotest_common.sh@10 -- # set +x 00:30:29.523 02:05:15 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:29.523 02:05:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:29.523 02:05:15 -- common/autotest_common.sh@10 -- # set +x 00:30:29.523 02:05:15 -- spdkcli/nvmf.sh@69 -- # check_match 00:30:29.523 02:05:15 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:29.781 02:05:15 
-- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:30.038 02:05:15 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:30.038 02:05:15 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:30.038 02:05:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:30.038 02:05:15 -- common/autotest_common.sh@10 -- # set +x 00:30:30.038 02:05:15 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:30.038 02:05:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:30.038 02:05:15 -- common/autotest_common.sh@10 -- # set +x 00:30:30.038 02:05:15 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:30.038 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:30.038 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:30.038 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:30.038 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:30.038 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:30.038 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:30.038 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:30.038 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:30.038 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:30.038 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:30.038 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:30.038 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:30.038 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:30.038 ' 00:30:35.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:35.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:35.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:35.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:35.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:35.302 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:35.302 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:35.302 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:35.302 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:35.302 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:35.302 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:35.302 Executing 
command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:35.302 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:35.302 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:35.302 02:05:20 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:35.302 02:05:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:35.302 02:05:20 -- common/autotest_common.sh@10 -- # set +x 00:30:35.302 02:05:20 -- spdkcli/nvmf.sh@90 -- # killprocess 2289932 00:30:35.302 02:05:20 -- common/autotest_common.sh@926 -- # '[' -z 2289932 ']' 00:30:35.302 02:05:20 -- common/autotest_common.sh@930 -- # kill -0 2289932 00:30:35.302 02:05:20 -- common/autotest_common.sh@931 -- # uname 00:30:35.302 02:05:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:35.302 02:05:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2289932 00:30:35.302 02:05:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:35.302 02:05:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:35.302 02:05:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2289932' 00:30:35.302 killing process with pid 2289932 00:30:35.302 02:05:20 -- common/autotest_common.sh@945 -- # kill 2289932 00:30:35.302 [2024-04-15 02:05:20.745280] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:35.302 02:05:20 -- common/autotest_common.sh@950 -- # wait 2289932 00:30:35.561 02:05:20 -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:35.561 02:05:20 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:35.561 02:05:20 -- spdkcli/common.sh@13 -- # '[' -n 2289932 ']' 00:30:35.561 02:05:20 -- spdkcli/common.sh@14 -- # killprocess 2289932 00:30:35.561 02:05:20 -- common/autotest_common.sh@926 -- # '[' -z 2289932 ']' 00:30:35.561 02:05:20 -- common/autotest_common.sh@930 -- # kill -0 2289932 00:30:35.561 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2289932) - No such process 00:30:35.561 02:05:20 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2289932 is not found' 00:30:35.561 Process with pid 2289932 is not found 00:30:35.561 02:05:20 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:35.561 02:05:20 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:35.561 02:05:20 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:35.561 00:30:35.561 real 0m16.680s 00:30:35.561 user 0m35.406s 00:30:35.561 sys 0m0.832s 00:30:35.561 02:05:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:35.561 02:05:20 -- common/autotest_common.sh@10 -- # set +x 00:30:35.561 ************************************ 00:30:35.561 END TEST spdkcli_nvmf_tcp 00:30:35.561 ************************************ 00:30:35.561 02:05:20 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:35.561 02:05:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:30:35.561 02:05:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:35.561 02:05:20 -- common/autotest_common.sh@10 -- # set +x 00:30:35.561 ************************************ 00:30:35.561 START TEST 
nvmf_identify_passthru 00:30:35.561 ************************************ 00:30:35.561 02:05:20 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:35.561 * Looking for test storage... 00:30:35.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:35.561 02:05:21 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:35.561 02:05:21 -- nvmf/common.sh@7 -- # uname -s 00:30:35.561 02:05:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:35.561 02:05:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:35.561 02:05:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:35.561 02:05:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:35.561 02:05:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:35.561 02:05:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:35.561 02:05:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:35.561 02:05:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:35.561 02:05:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:35.561 02:05:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:35.561 02:05:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:35.561 02:05:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:35.561 02:05:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:35.561 02:05:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:35.561 02:05:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:35.561 02:05:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.561 02:05:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.561 02:05:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.561 02:05:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.561 02:05:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.561 02:05:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.561 02:05:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.561 02:05:21 -- paths/export.sh@5 -- # export PATH 00:30:35.561 
02:05:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.561 02:05:21 -- nvmf/common.sh@46 -- # : 0 00:30:35.561 02:05:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:35.561 02:05:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:35.561 02:05:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:35.561 02:05:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:35.561 02:05:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:35.561 02:05:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:35.561 02:05:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:35.561 02:05:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:35.561 02:05:21 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:35.561 02:05:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:35.561 02:05:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:35.561 02:05:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:35.561 02:05:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.561 02:05:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.561 02:05:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.561 02:05:21 -- paths/export.sh@5 -- # export PATH 00:30:35.561 02:05:21 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.561 02:05:21 -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:35.561 02:05:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:35.561 02:05:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:35.561 02:05:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:35.561 02:05:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:35.561 02:05:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:35.561 02:05:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:35.561 02:05:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:35.561 02:05:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:35.561 02:05:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:35.561 02:05:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:35.561 02:05:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:35.561 02:05:21 -- common/autotest_common.sh@10 -- # set +x 00:30:37.489 02:05:22 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:37.489 02:05:22 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:37.489 02:05:22 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:37.489 02:05:22 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:37.489 02:05:22 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:37.489 02:05:22 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:37.489 02:05:22 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:37.489 02:05:22 -- nvmf/common.sh@294 -- # net_devs=() 00:30:37.489 02:05:22 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:37.489 02:05:22 -- nvmf/common.sh@295 -- # e810=() 00:30:37.489 02:05:22 -- nvmf/common.sh@295 -- # local -ga e810 00:30:37.489 02:05:22 -- nvmf/common.sh@296 -- # x722=() 00:30:37.489 02:05:22 -- nvmf/common.sh@296 -- # local -ga x722 00:30:37.489 02:05:22 -- nvmf/common.sh@297 -- # mlx=() 00:30:37.489 02:05:22 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:37.489 02:05:22 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:37.489 02:05:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:37.489 02:05:22 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:37.489 02:05:22 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:37.489 02:05:22 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:37.489 02:05:22 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:37.489 02:05:22 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:37.489 02:05:22 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:37.489 02:05:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:37.489 02:05:22 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:37.489 02:05:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:37.489 02:05:22 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:37.489 02:05:22 -- nvmf/common.sh@320 -- # [[ tcp 
== rdma ]] 00:30:37.489 02:05:22 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:37.489 02:05:22 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:37.489 02:05:22 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:37.489 02:05:22 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:37.489 02:05:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:37.489 02:05:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:37.489 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:37.489 02:05:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:37.489 02:05:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:37.489 02:05:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.489 02:05:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.489 02:05:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:37.489 02:05:22 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:37.489 02:05:22 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:37.489 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:37.489 02:05:22 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:37.489 02:05:22 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:37.489 02:05:22 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:37.489 02:05:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:37.489 02:05:22 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:37.489 02:05:22 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:37.489 02:05:22 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:37.489 02:05:22 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:37.489 02:05:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:37.489 02:05:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.489 02:05:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:37.489 02:05:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.489 02:05:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:37.489 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:37.489 02:05:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.489 02:05:22 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:37.489 02:05:22 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:37.489 02:05:22 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:37.489 02:05:22 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:37.489 02:05:22 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:37.489 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:37.489 02:05:22 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:37.489 02:05:22 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:37.489 02:05:22 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:37.489 02:05:22 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:37.489 02:05:22 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:37.489 02:05:22 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:37.489 02:05:22 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:37.489 02:05:22 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:37.489 02:05:22 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:37.489 02:05:22 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:37.489 02:05:22 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:37.489 02:05:22 -- nvmf/common.sh@236 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:37.489 02:05:22 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:37.489 02:05:22 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:37.489 02:05:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:37.489 02:05:22 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:37.489 02:05:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:37.489 02:05:22 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:37.489 02:05:22 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:37.489 02:05:22 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:37.489 02:05:23 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:37.489 02:05:23 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:37.489 02:05:23 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:37.489 02:05:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:37.489 02:05:23 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:37.489 02:05:23 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:37.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:37.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:30:37.489 00:30:37.489 --- 10.0.0.2 ping statistics --- 00:30:37.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.489 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:30:37.489 02:05:23 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:37.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:37.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:30:37.489 00:30:37.489 --- 10.0.0.1 ping statistics --- 00:30:37.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:37.489 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:30:37.489 02:05:23 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:37.489 02:05:23 -- nvmf/common.sh@410 -- # return 0 00:30:37.489 02:05:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:30:37.489 02:05:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:37.489 02:05:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:37.489 02:05:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:37.489 02:05:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:37.489 02:05:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:37.489 02:05:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:37.489 02:05:23 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:37.489 02:05:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:37.489 02:05:23 -- common/autotest_common.sh@10 -- # set +x 00:30:37.489 02:05:23 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:37.489 02:05:23 -- common/autotest_common.sh@1509 -- # bdfs=() 00:30:37.489 02:05:23 -- common/autotest_common.sh@1509 -- # local bdfs 00:30:37.489 02:05:23 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:30:37.489 02:05:23 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:30:37.489 02:05:23 -- common/autotest_common.sh@1498 -- # bdfs=() 00:30:37.489 02:05:23 -- common/autotest_common.sh@1498 -- # local bdfs 00:30:37.489 02:05:23 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r 
'.config[].params.traddr')) 00:30:37.489 02:05:23 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:37.489 02:05:23 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:30:37.749 02:05:23 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:30:37.749 02:05:23 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:88:00.0 00:30:37.749 02:05:23 -- common/autotest_common.sh@1512 -- # echo 0000:88:00.0 00:30:37.749 02:05:23 -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:30:37.749 02:05:23 -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:30:37.749 02:05:23 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:37.749 02:05:23 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:37.749 02:05:23 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:37.749 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.933 02:05:27 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:30:41.933 02:05:27 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:30:41.933 02:05:27 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:41.933 02:05:27 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:30:41.933 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.116 02:05:31 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:30:46.116 02:05:31 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:30:46.116 02:05:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:46.116 02:05:31 -- common/autotest_common.sh@10 -- # set +x 00:30:46.116 02:05:31 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:30:46.116 02:05:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:46.116 02:05:31 -- common/autotest_common.sh@10 -- # set +x 00:30:46.116 02:05:31 -- target/identify_passthru.sh@31 -- # nvmfpid=2294656 00:30:46.116 02:05:31 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:30:46.116 02:05:31 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:46.116 02:05:31 -- target/identify_passthru.sh@35 -- # waitforlisten 2294656 00:30:46.116 02:05:31 -- common/autotest_common.sh@819 -- # '[' -z 2294656 ']' 00:30:46.116 02:05:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:46.116 02:05:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:46.116 02:05:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:46.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:46.116 02:05:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:46.116 02:05:31 -- common/autotest_common.sh@10 -- # set +x 00:30:46.116 [2024-04-15 02:05:31.605259] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
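(For anyone replaying this test outside the CI harness: the identify-passthru flow in this section boils down to a short RPC sequence against a --wait-for-rpc target. The sketch below is distilled from the rpc_cmd xtrace recorded in this log, not an authoritative recipe; it assumes you run from an SPDK checkout root, that scripts/rpc.py talks to the default /var/tmp/spdk.sock, and that a local NVMe controller exists at the same BDF 0000:88:00.0 as on this CI node.)

  # Start the target paused so config can be applied before subsystem init
  # (the log additionally wraps this in 'ip netns exec cvl_0_0_ns_spdk'
  # and pins it with -i 0 -e 0xFFFF -m 0xF; netns omitted here).
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

  # Enable the custom identify handler, then finish initialization --
  # these mirror the nvmf_set_config/framework_start_init requests below.
  scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
  scripts/rpc.py framework_start_init

  # TCP transport, NVMe bdev, subsystem, namespace, listener -- the same
  # arguments as the rpc_cmd calls recorded later in this test.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # The pass-through check itself: the serial/model reported over fabrics
  # should match the local PCIe identify output captured earlier.
  ./build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:'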
00:30:46.116 [2024-04-15 02:05:31.605356] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:46.116 EAL: No free 2048 kB hugepages reported on node 1 00:30:46.116 [2024-04-15 02:05:31.672646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:46.374 [2024-04-15 02:05:31.763696] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:46.374 [2024-04-15 02:05:31.763828] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:46.374 [2024-04-15 02:05:31.763847] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:46.374 [2024-04-15 02:05:31.763860] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:46.374 [2024-04-15 02:05:31.763913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.374 [2024-04-15 02:05:31.763963] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:46.374 [2024-04-15 02:05:31.764012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:30:46.374 [2024-04-15 02:05:31.764014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:46.374 02:05:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:46.374 02:05:31 -- common/autotest_common.sh@852 -- # return 0 00:30:46.374 02:05:31 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:30:46.374 02:05:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:46.374 02:05:31 -- common/autotest_common.sh@10 -- # set +x 00:30:46.374 INFO: Log level set to 20 00:30:46.374 INFO: Requests: 00:30:46.374 { 00:30:46.374 "jsonrpc": "2.0", 00:30:46.374 "method": "nvmf_set_config", 00:30:46.374 "id": 1, 00:30:46.374 "params": { 00:30:46.374 "admin_cmd_passthru": { 00:30:46.374 "identify_ctrlr": true 00:30:46.374 } 00:30:46.374 } 00:30:46.374 } 00:30:46.374 00:30:46.374 INFO: response: 00:30:46.374 { 00:30:46.374 "jsonrpc": "2.0", 00:30:46.374 "id": 1, 00:30:46.374 "result": true 00:30:46.374 } 00:30:46.374 00:30:46.374 02:05:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:46.374 02:05:31 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:30:46.374 02:05:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:46.374 02:05:31 -- common/autotest_common.sh@10 -- # set +x 00:30:46.374 INFO: Setting log level to 20 00:30:46.374 INFO: Setting log level to 20 00:30:46.374 INFO: Log level set to 20 00:30:46.374 INFO: Log level set to 20 00:30:46.374 INFO: Requests: 00:30:46.374 { 00:30:46.374 "jsonrpc": "2.0", 00:30:46.374 "method": "framework_start_init", 00:30:46.374 "id": 1 00:30:46.374 } 00:30:46.374 00:30:46.375 INFO: Requests: 00:30:46.375 { 00:30:46.375 "jsonrpc": "2.0", 00:30:46.375 "method": "framework_start_init", 00:30:46.375 "id": 1 00:30:46.375 } 00:30:46.375 00:30:46.375 [2024-04-15 02:05:31.922430] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:30:46.375 INFO: response: 00:30:46.375 { 00:30:46.375 "jsonrpc": "2.0", 00:30:46.375 "id": 1, 00:30:46.375 "result": true 00:30:46.375 } 00:30:46.375 00:30:46.375 INFO: response: 00:30:46.375 { 00:30:46.375 "jsonrpc": "2.0", 00:30:46.375 "id": 1, 00:30:46.375 "result": true 00:30:46.375 } 00:30:46.375 00:30:46.375 02:05:31 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:46.375 02:05:31 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:46.375 02:05:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:46.375 02:05:31 -- common/autotest_common.sh@10 -- # set +x 00:30:46.375 INFO: Setting log level to 40 00:30:46.375 INFO: Setting log level to 40 00:30:46.375 INFO: Setting log level to 40 00:30:46.375 [2024-04-15 02:05:31.932560] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:46.375 02:05:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:46.375 02:05:31 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:30:46.375 02:05:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:46.375 02:05:31 -- common/autotest_common.sh@10 -- # set +x 00:30:46.375 02:05:31 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:30:46.375 02:05:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:46.375 02:05:31 -- common/autotest_common.sh@10 -- # set +x 00:30:49.652 Nvme0n1 00:30:49.652 02:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.652 02:05:34 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:30:49.652 02:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.652 02:05:34 -- common/autotest_common.sh@10 -- # set +x 00:30:49.652 02:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.652 02:05:34 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:49.652 02:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.652 02:05:34 -- common/autotest_common.sh@10 -- # set +x 00:30:49.652 02:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.652 02:05:34 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:49.652 02:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.652 02:05:34 -- common/autotest_common.sh@10 -- # set +x 00:30:49.652 [2024-04-15 02:05:34.829456] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:49.652 02:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.652 02:05:34 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:30:49.652 02:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.652 02:05:34 -- common/autotest_common.sh@10 -- # set +x 00:30:49.652 [2024-04-15 02:05:34.837174] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:30:49.652 [ 00:30:49.652 { 00:30:49.652 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:49.652 "subtype": "Discovery", 00:30:49.652 "listen_addresses": [], 00:30:49.652 "allow_any_host": true, 00:30:49.652 "hosts": [] 00:30:49.652 }, 00:30:49.652 { 00:30:49.652 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:49.652 "subtype": "NVMe", 00:30:49.652 "listen_addresses": [ 00:30:49.652 { 00:30:49.652 "transport": "TCP", 00:30:49.652 "trtype": "TCP", 00:30:49.652 "adrfam": "IPv4", 00:30:49.652 "traddr": "10.0.0.2", 00:30:49.652 "trsvcid": "4420" 00:30:49.652 } 00:30:49.652 ], 00:30:49.652 "allow_any_host": true, 00:30:49.652 "hosts": [], 00:30:49.652 "serial_number": "SPDK00000000000001", 
00:30:49.652 "model_number": "SPDK bdev Controller", 00:30:49.652 "max_namespaces": 1, 00:30:49.652 "min_cntlid": 1, 00:30:49.652 "max_cntlid": 65519, 00:30:49.652 "namespaces": [ 00:30:49.652 { 00:30:49.652 "nsid": 1, 00:30:49.652 "bdev_name": "Nvme0n1", 00:30:49.652 "name": "Nvme0n1", 00:30:49.652 "nguid": "9C1CE8960C5F47AC89182B61AB23D3AE", 00:30:49.652 "uuid": "9c1ce896-0c5f-47ac-8918-2b61ab23d3ae" 00:30:49.652 } 00:30:49.652 ] 00:30:49.652 } 00:30:49.652 ] 00:30:49.652 02:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.652 02:05:34 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:49.652 02:05:34 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:30:49.652 02:05:34 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:30:49.652 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.652 02:05:35 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:30:49.652 02:05:35 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:30:49.652 02:05:35 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:30:49.652 02:05:35 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:30:49.652 EAL: No free 2048 kB hugepages reported on node 1 00:30:49.910 02:05:35 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:30:49.910 02:05:35 -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:30:49.910 02:05:35 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:30:49.910 02:05:35 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:49.910 02:05:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:49.910 02:05:35 -- common/autotest_common.sh@10 -- # set +x 00:30:49.910 02:05:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:49.910 02:05:35 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:30:49.910 02:05:35 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:30:49.910 02:05:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:49.910 02:05:35 -- nvmf/common.sh@116 -- # sync 00:30:49.910 02:05:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:30:49.910 02:05:35 -- nvmf/common.sh@119 -- # set +e 00:30:49.910 02:05:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:49.910 02:05:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:30:49.910 rmmod nvme_tcp 00:30:49.910 rmmod nvme_fabrics 00:30:49.910 rmmod nvme_keyring 00:30:49.910 02:05:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:49.910 02:05:35 -- nvmf/common.sh@123 -- # set -e 00:30:49.910 02:05:35 -- nvmf/common.sh@124 -- # return 0 00:30:49.910 02:05:35 -- nvmf/common.sh@477 -- # '[' -n 2294656 ']' 00:30:49.910 02:05:35 -- nvmf/common.sh@478 -- # killprocess 2294656 00:30:49.910 02:05:35 -- common/autotest_common.sh@926 -- # '[' -z 2294656 ']' 00:30:49.910 02:05:35 -- common/autotest_common.sh@930 -- # kill -0 2294656 00:30:49.910 02:05:35 -- common/autotest_common.sh@931 -- # uname 00:30:49.910 02:05:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:30:49.910 02:05:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2294656 00:30:49.910 02:05:35 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:30:49.910 02:05:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:30:49.910 02:05:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2294656' 00:30:49.910 killing process with pid 2294656 00:30:49.910 02:05:35 -- common/autotest_common.sh@945 -- # kill 2294656 00:30:49.910 [2024-04-15 02:05:35.440724] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:49.910 02:05:35 -- common/autotest_common.sh@950 -- # wait 2294656 00:30:51.809 02:05:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:51.809 02:05:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:30:51.809 02:05:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:30:51.809 02:05:36 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:51.809 02:05:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:30:51.809 02:05:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.809 02:05:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:51.809 02:05:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.715 02:05:39 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:30:53.715 00:30:53.715 real 0m18.065s 00:30:53.715 user 0m27.272s 00:30:53.715 sys 0m2.313s 00:30:53.715 02:05:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:53.715 02:05:39 -- common/autotest_common.sh@10 -- # set +x 00:30:53.715 ************************************ 00:30:53.715 END TEST nvmf_identify_passthru 00:30:53.715 ************************************ 00:30:53.715 02:05:39 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:53.715 02:05:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:53.715 02:05:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:53.715 02:05:39 -- common/autotest_common.sh@10 -- # set +x 00:30:53.715 ************************************ 00:30:53.715 START TEST nvmf_dif 00:30:53.715 ************************************ 00:30:53.715 02:05:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:30:53.715 * Looking for test storage... 
00:30:53.715 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:53.715 02:05:39 -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:53.715 02:05:39 -- nvmf/common.sh@7 -- # uname -s 00:30:53.715 02:05:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:53.715 02:05:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:53.715 02:05:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:53.715 02:05:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:53.715 02:05:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:53.715 02:05:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:53.715 02:05:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:53.715 02:05:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:53.715 02:05:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:53.715 02:05:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:53.715 02:05:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:53.715 02:05:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:53.715 02:05:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:53.715 02:05:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:53.715 02:05:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:53.715 02:05:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:53.715 02:05:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:53.715 02:05:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:53.715 02:05:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:53.715 02:05:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.715 02:05:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.715 02:05:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.715 02:05:39 -- paths/export.sh@5 -- # export PATH 00:30:53.715 02:05:39 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:53.715 02:05:39 -- nvmf/common.sh@46 -- # : 0 00:30:53.715 02:05:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:30:53.715 02:05:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:30:53.715 02:05:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:30:53.715 02:05:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:53.715 02:05:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:53.715 02:05:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:30:53.715 02:05:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:30:53.715 02:05:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:30:53.715 02:05:39 -- target/dif.sh@15 -- # NULL_META=16 00:30:53.715 02:05:39 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:30:53.715 02:05:39 -- target/dif.sh@15 -- # NULL_SIZE=64 00:30:53.715 02:05:39 -- target/dif.sh@15 -- # NULL_DIF=1 00:30:53.715 02:05:39 -- target/dif.sh@135 -- # nvmftestinit 00:30:53.715 02:05:39 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:30:53.715 02:05:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:53.715 02:05:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:30:53.715 02:05:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:30:53.715 02:05:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:30:53.715 02:05:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:53.715 02:05:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:53.715 02:05:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:53.715 02:05:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:30:53.715 02:05:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:30:53.715 02:05:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:30:53.715 02:05:39 -- common/autotest_common.sh@10 -- # set +x 00:30:55.616 02:05:40 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:30:55.616 02:05:40 -- nvmf/common.sh@290 -- # pci_devs=() 00:30:55.616 02:05:40 -- nvmf/common.sh@290 -- # local -a pci_devs 00:30:55.616 02:05:40 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:30:55.616 02:05:40 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:30:55.616 02:05:40 -- nvmf/common.sh@292 -- # pci_drivers=() 00:30:55.616 02:05:40 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:30:55.616 02:05:40 -- nvmf/common.sh@294 -- # net_devs=() 00:30:55.616 02:05:40 -- nvmf/common.sh@294 -- # local -ga net_devs 00:30:55.616 02:05:40 -- nvmf/common.sh@295 -- # e810=() 00:30:55.616 02:05:40 -- nvmf/common.sh@295 -- # local -ga e810 00:30:55.616 02:05:40 -- nvmf/common.sh@296 -- # x722=() 00:30:55.616 02:05:40 -- nvmf/common.sh@296 -- # local -ga x722 00:30:55.616 02:05:40 -- nvmf/common.sh@297 -- # mlx=() 00:30:55.616 02:05:40 -- nvmf/common.sh@297 -- # local -ga mlx 00:30:55.616 02:05:40 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:55.616 02:05:40 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:55.616 02:05:40 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:55.616 02:05:40 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:30:55.616 02:05:40 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:55.616 02:05:40 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:55.616 02:05:40 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:55.616 02:05:40 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:55.616 02:05:40 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:55.616 02:05:40 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:55.616 02:05:40 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:55.616 02:05:40 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:30:55.616 02:05:40 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:30:55.616 02:05:40 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:30:55.616 02:05:40 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:30:55.616 02:05:40 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:30:55.616 02:05:40 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:30:55.616 02:05:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:55.616 02:05:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:55.616 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:55.616 02:05:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:55.616 02:05:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:55.616 02:05:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.616 02:05:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.616 02:05:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:55.616 02:05:40 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:30:55.616 02:05:40 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:55.616 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:55.616 02:05:40 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:30:55.616 02:05:40 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:30:55.616 02:05:40 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.616 02:05:40 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.616 02:05:40 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:30:55.616 02:05:40 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:30:55.616 02:05:40 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:30:55.616 02:05:40 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:30:55.616 02:05:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:55.616 02:05:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.616 02:05:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:55.616 02:05:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.616 02:05:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:55.616 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:55.616 02:05:40 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:30:55.616 02:05:40 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:30:55.616 02:05:40 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:55.616 02:05:40 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:30:55.616 02:05:40 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:55.616 02:05:40 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:55.616 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:55.616 02:05:40 -- nvmf/common.sh@389 -- # 
net_devs+=("${pci_net_devs[@]}") 00:30:55.616 02:05:40 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:30:55.616 02:05:40 -- nvmf/common.sh@402 -- # is_hw=yes 00:30:55.616 02:05:40 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:30:55.617 02:05:40 -- nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:30:55.617 02:05:40 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:30:55.617 02:05:40 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:55.617 02:05:40 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:55.617 02:05:40 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:55.617 02:05:40 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:30:55.617 02:05:40 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:55.617 02:05:40 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:55.617 02:05:40 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:30:55.617 02:05:40 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:55.617 02:05:40 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:55.617 02:05:40 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:30:55.617 02:05:40 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:30:55.617 02:05:40 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:30:55.617 02:05:40 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:55.617 02:05:41 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:55.617 02:05:41 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:55.617 02:05:41 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:30:55.617 02:05:41 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:55.617 02:05:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:55.617 02:05:41 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:55.617 02:05:41 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:30:55.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:55.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:30:55.617 00:30:55.617 --- 10.0.0.2 ping statistics --- 00:30:55.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.617 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:30:55.617 02:05:41 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:55.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:55.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:30:55.617 00:30:55.617 --- 10.0.0.1 ping statistics --- 00:30:55.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:55.617 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:30:55.617 02:05:41 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:55.617 02:05:41 -- nvmf/common.sh@410 -- # return 0 00:30:55.617 02:05:41 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:30:55.617 02:05:41 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:56.560 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:56.560 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:30:56.560 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:56.560 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:56.560 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:56.560 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:56.560 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:56.560 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:56.560 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:56.560 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:30:56.560 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:30:56.560 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:30:56.560 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:30:56.560 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:30:56.560 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:30:56.560 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:30:56.560 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:30:56.840 02:05:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:56.840 02:05:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:30:56.840 02:05:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:30:56.840 02:05:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:56.840 02:05:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:30:56.840 02:05:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:30:56.840 02:05:42 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:30:56.840 02:05:42 -- target/dif.sh@137 -- # nvmfappstart 00:30:56.840 02:05:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:30:56.840 02:05:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:30:56.840 02:05:42 -- common/autotest_common.sh@10 -- # set +x 00:30:56.840 02:05:42 -- nvmf/common.sh@469 -- # nvmfpid=2297847 00:30:56.840 02:05:42 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:30:56.840 02:05:42 -- nvmf/common.sh@470 -- # waitforlisten 2297847 00:30:56.840 02:05:42 -- common/autotest_common.sh@819 -- # '[' -z 2297847 ']' 00:30:56.840 02:05:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.840 02:05:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:30:56.840 02:05:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
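The nvmf_tcp_init sequence above builds the test topology: the two ports of one physical NIC (cvl_0_0, cvl_0_1) are split across network namespaces so NVMe/TCP traffic takes a real on-wire path on a single host, with the target port at 10.0.0.2 inside cvl_0_0_ns_spdk and the initiator port at 10.0.0.1 in the default namespace. Condensed to its bare commands, with addresses, interface names, and the 4420 port all as they appear in this log:

    ip netns add cvl_0_0_ns_spdk                  # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                            # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # nvmf_tgt is then started inside the namespace, as nvmfappstart does above:
    ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF &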
00:30:56.840 02:05:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:30:56.840 02:05:42 -- common/autotest_common.sh@10 -- # set +x 00:30:56.840 [2024-04-15 02:05:42.354337] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 00:30:56.840 [2024-04-15 02:05:42.354431] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.840 EAL: No free 2048 kB hugepages reported on node 1 00:30:56.840 [2024-04-15 02:05:42.424137] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:57.098 [2024-04-15 02:05:42.512919] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:30:57.098 [2024-04-15 02:05:42.513244] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:57.098 [2024-04-15 02:05:42.513285] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:57.098 [2024-04-15 02:05:42.513299] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:57.098 [2024-04-15 02:05:42.513329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:57.663 02:05:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:30:57.663 02:05:43 -- common/autotest_common.sh@852 -- # return 0 00:30:57.663 02:05:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:30:57.663 02:05:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:30:57.663 02:05:43 -- common/autotest_common.sh@10 -- # set +x 00:30:57.922 02:05:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:57.922 02:05:43 -- target/dif.sh@139 -- # create_transport 00:30:57.922 02:05:43 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:30:57.922 02:05:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:57.922 02:05:43 -- common/autotest_common.sh@10 -- # set +x 00:30:57.922 [2024-04-15 02:05:43.334855] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:57.922 02:05:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:57.922 02:05:43 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:30:57.922 02:05:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:30:57.922 02:05:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:30:57.922 02:05:43 -- common/autotest_common.sh@10 -- # set +x 00:30:57.922 ************************************ 00:30:57.922 START TEST fio_dif_1_default 00:30:57.922 ************************************ 00:30:57.923 02:05:43 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:30:57.923 02:05:43 -- target/dif.sh@86 -- # create_subsystems 0 00:30:57.923 02:05:43 -- target/dif.sh@28 -- # local sub 00:30:57.923 02:05:43 -- target/dif.sh@30 -- # for sub in "$@" 00:30:57.923 02:05:43 -- target/dif.sh@31 -- # create_subsystem 0 00:30:57.923 02:05:43 -- target/dif.sh@18 -- # local sub_id=0 00:30:57.923 02:05:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:57.923 02:05:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:57.923 02:05:43 -- common/autotest_common.sh@10 -- # set +x 00:30:57.923 bdev_null0 00:30:57.923 02:05:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:57.923 02:05:43 -- target/dif.sh@22 -- 
# rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:57.923 02:05:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:57.923 02:05:43 -- common/autotest_common.sh@10 -- # set +x 00:30:57.923 02:05:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:57.923 02:05:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:57.923 02:05:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:57.923 02:05:43 -- common/autotest_common.sh@10 -- # set +x 00:30:57.923 02:05:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:57.923 02:05:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:57.923 02:05:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:30:57.923 02:05:43 -- common/autotest_common.sh@10 -- # set +x 00:30:57.923 [2024-04-15 02:05:43.375129] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:57.923 02:05:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:30:57.923 02:05:43 -- target/dif.sh@87 -- # fio /dev/fd/62 00:30:57.923 02:05:43 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:30:57.923 02:05:43 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:57.923 02:05:43 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:57.923 02:05:43 -- nvmf/common.sh@520 -- # config=() 00:30:57.923 02:05:43 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:57.923 02:05:43 -- nvmf/common.sh@520 -- # local subsystem config 00:30:57.923 02:05:43 -- target/dif.sh@82 -- # gen_fio_conf 00:30:57.923 02:05:43 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:30:57.923 02:05:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:30:57.923 02:05:43 -- target/dif.sh@54 -- # local file 00:30:57.923 02:05:43 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:57.923 02:05:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:30:57.923 { 00:30:57.923 "params": { 00:30:57.923 "name": "Nvme$subsystem", 00:30:57.923 "trtype": "$TEST_TRANSPORT", 00:30:57.923 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:57.923 "adrfam": "ipv4", 00:30:57.923 "trsvcid": "$NVMF_PORT", 00:30:57.923 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:57.923 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:57.923 "hdgst": ${hdgst:-false}, 00:30:57.923 "ddgst": ${ddgst:-false} 00:30:57.923 }, 00:30:57.923 "method": "bdev_nvme_attach_controller" 00:30:57.923 } 00:30:57.923 EOF 00:30:57.923 )") 00:30:57.923 02:05:43 -- common/autotest_common.sh@1318 -- # local sanitizers 00:30:57.923 02:05:43 -- target/dif.sh@56 -- # cat 00:30:57.923 02:05:43 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:57.923 02:05:43 -- common/autotest_common.sh@1320 -- # shift 00:30:57.923 02:05:43 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:30:57.923 02:05:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:57.923 02:05:43 -- nvmf/common.sh@542 -- # cat 00:30:57.923 02:05:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:57.923 02:05:43 -- 
common/autotest_common.sh@1324 -- # grep libasan 00:30:57.923 02:05:43 -- target/dif.sh@72 -- # (( file = 1 )) 00:30:57.923 02:05:43 -- target/dif.sh@72 -- # (( file <= files )) 00:30:57.923 02:05:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:57.923 02:05:43 -- nvmf/common.sh@544 -- # jq . 00:30:57.923 02:05:43 -- nvmf/common.sh@545 -- # IFS=, 00:30:57.923 02:05:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:30:57.923 "params": { 00:30:57.923 "name": "Nvme0", 00:30:57.923 "trtype": "tcp", 00:30:57.923 "traddr": "10.0.0.2", 00:30:57.923 "adrfam": "ipv4", 00:30:57.923 "trsvcid": "4420", 00:30:57.923 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:57.923 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:57.923 "hdgst": false, 00:30:57.923 "ddgst": false 00:30:57.923 }, 00:30:57.923 "method": "bdev_nvme_attach_controller" 00:30:57.923 }' 00:30:57.923 02:05:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:57.923 02:05:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:57.923 02:05:43 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:30:57.923 02:05:43 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:57.923 02:05:43 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:30:57.923 02:05:43 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:30:57.923 02:05:43 -- common/autotest_common.sh@1324 -- # asan_lib= 00:30:57.923 02:05:43 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:30:57.923 02:05:43 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:57.923 02:05:43 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:58.181 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:30:58.181 fio-3.35 00:30:58.181 Starting 1 thread 00:30:58.181 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.747 [2024-04-15 02:05:44.088545] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
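The JSON printed just above is what fio receives on /dev/fd/62: a bdev_nvme_attach_controller request that makes the spdk_bdev ioengine act as an NVMe/TCP host, attach controller Nvme0 at 10.0.0.2:4420, and expose its namespace as a bdev that the job file on /dev/fd/61 can target. One plausible wiring of those descriptors with bash process substitution; the test's own fio_bdev and gen_* helpers differ in detail, and gen_nvmf_target_json wraps the params shown above into a full SPDK JSON-config document:

    # Illustrative: feed the JSON config and fio job file via anonymous fds.
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 \
      62< <(gen_nvmf_target_json 0) 61< <(gen_fio_conf)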
00:30:58.747 [2024-04-15 02:05:44.088602] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:08.712 00:31:08.712 filename0: (groupid=0, jobs=1): err= 0: pid=2298209: Mon Apr 15 02:05:54 2024 00:31:08.712 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10002msec) 00:31:08.712 slat (nsec): min=6682, max=50642, avg=9097.69, stdev=4099.20 00:31:08.712 clat (usec): min=41766, max=46949, avg=41997.54, stdev=328.01 00:31:08.712 lat (usec): min=41773, max=46979, avg=42006.64, stdev=328.40 00:31:08.712 clat percentiles (usec): 00:31:08.712 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:31:08.712 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:08.712 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:08.712 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:31:08.712 | 99.99th=[46924] 00:31:08.712 bw ( KiB/s): min= 352, max= 384, per=99.81%, avg=380.63, stdev=10.09, samples=19 00:31:08.712 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:31:08.712 lat (msec) : 50=100.00% 00:31:08.712 cpu : usr=90.53%, sys=9.19%, ctx=18, majf=0, minf=178 00:31:08.712 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:08.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.712 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.712 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.712 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:08.712 00:31:08.712 Run status group 0 (all jobs): 00:31:08.712 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3808KiB (3899kB), run=10002-10002msec 00:31:08.971 02:05:54 -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:08.971 02:05:54 -- target/dif.sh@43 -- # local sub 00:31:08.971 02:05:54 -- target/dif.sh@45 -- # for sub in "$@" 00:31:08.971 02:05:54 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:08.971 02:05:54 -- target/dif.sh@36 -- # local sub_id=0 00:31:08.971 02:05:54 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:08.971 02:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.971 02:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:08.971 02:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.971 02:05:54 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:08.971 02:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.971 02:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:08.971 02:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.971 00:31:08.971 real 0m11.148s 00:31:08.971 user 0m9.992s 00:31:08.971 sys 0m1.204s 00:31:08.971 02:05:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:08.971 02:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:08.971 ************************************ 00:31:08.971 END TEST fio_dif_1_default 00:31:08.971 ************************************ 00:31:08.971 02:05:54 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:08.971 02:05:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:08.971 02:05:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:08.971 02:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:08.971 ************************************ 00:31:08.971 START TEST fio_dif_1_multi_subsystems 00:31:08.971 
************************************ 00:31:08.971 02:05:54 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:31:08.971 02:05:54 -- target/dif.sh@92 -- # local files=1 00:31:08.971 02:05:54 -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:08.971 02:05:54 -- target/dif.sh@28 -- # local sub 00:31:08.971 02:05:54 -- target/dif.sh@30 -- # for sub in "$@" 00:31:08.971 02:05:54 -- target/dif.sh@31 -- # create_subsystem 0 00:31:08.971 02:05:54 -- target/dif.sh@18 -- # local sub_id=0 00:31:08.971 02:05:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:08.971 02:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.971 02:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:08.971 bdev_null0 00:31:08.971 02:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.971 02:05:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:08.971 02:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.971 02:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:08.971 02:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.971 02:05:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:08.971 02:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.971 02:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:08.971 02:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.971 02:05:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:08.971 02:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.971 02:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:08.971 [2024-04-15 02:05:54.552256] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:08.971 02:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.971 02:05:54 -- target/dif.sh@30 -- # for sub in "$@" 00:31:08.971 02:05:54 -- target/dif.sh@31 -- # create_subsystem 1 00:31:08.971 02:05:54 -- target/dif.sh@18 -- # local sub_id=1 00:31:08.971 02:05:54 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:08.971 02:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.971 02:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:08.971 bdev_null1 00:31:08.971 02:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.971 02:05:54 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:08.971 02:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.971 02:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:08.971 02:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.971 02:05:54 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:08.971 02:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.971 02:05:54 -- common/autotest_common.sh@10 -- # set +x 00:31:08.971 02:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.971 02:05:54 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:08.971 02:05:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:08.971 02:05:54 -- common/autotest_common.sh@10 -- # set +x 
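Each create_subsystem call above is the same four-RPC recipe: a null bdev sized by the NULL_* defaults (64 MB, 512-byte blocks, 16 bytes of metadata carrying DIF type 1), wrapped in an NVMe-oF subsystem with a TCP listener on 10.0.0.2:4420. Issued by hand against the running target, the equivalent rpc.py calls would be as follows; the script path is taken from this workspace, and rpc_cmd in the log routes through the same RPCs:

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
         --serial-number 53313233-0 --allow-any-host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
         -t tcp -a 10.0.0.2 -s 4420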
00:31:08.971 02:05:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:08.971 02:05:54 -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:08.972 02:05:54 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:08.972 02:05:54 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:08.972 02:05:54 -- nvmf/common.sh@520 -- # config=() 00:31:08.972 02:05:54 -- nvmf/common.sh@520 -- # local subsystem config 00:31:08.972 02:05:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:08.972 02:05:54 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:08.972 02:05:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:08.972 { 00:31:08.972 "params": { 00:31:08.972 "name": "Nvme$subsystem", 00:31:08.972 "trtype": "$TEST_TRANSPORT", 00:31:08.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:08.972 "adrfam": "ipv4", 00:31:08.972 "trsvcid": "$NVMF_PORT", 00:31:08.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:08.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:08.972 "hdgst": ${hdgst:-false}, 00:31:08.972 "ddgst": ${ddgst:-false} 00:31:08.972 }, 00:31:08.972 "method": "bdev_nvme_attach_controller" 00:31:08.972 } 00:31:08.972 EOF 00:31:08.972 )") 00:31:08.972 02:05:54 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:08.972 02:05:54 -- target/dif.sh@82 -- # gen_fio_conf 00:31:08.972 02:05:54 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:08.972 02:05:54 -- target/dif.sh@54 -- # local file 00:31:08.972 02:05:54 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:08.972 02:05:54 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:08.972 02:05:54 -- target/dif.sh@56 -- # cat 00:31:08.972 02:05:54 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:08.972 02:05:54 -- common/autotest_common.sh@1320 -- # shift 00:31:08.972 02:05:54 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:08.972 02:05:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:08.972 02:05:54 -- nvmf/common.sh@542 -- # cat 00:31:08.972 02:05:54 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:08.972 02:05:54 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:08.972 02:05:54 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:08.972 02:05:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:08.972 02:05:54 -- target/dif.sh@72 -- # (( file <= files )) 00:31:08.972 02:05:54 -- target/dif.sh@73 -- # cat 00:31:08.972 02:05:54 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:08.972 02:05:54 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:08.972 { 00:31:08.972 "params": { 00:31:08.972 "name": "Nvme$subsystem", 00:31:08.972 "trtype": "$TEST_TRANSPORT", 00:31:08.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:08.972 "adrfam": "ipv4", 00:31:08.972 "trsvcid": "$NVMF_PORT", 00:31:08.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:08.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:08.972 "hdgst": ${hdgst:-false}, 00:31:08.972 "ddgst": ${ddgst:-false} 00:31:08.972 }, 00:31:08.972 "method": "bdev_nvme_attach_controller" 00:31:08.972 } 00:31:08.972 EOF 00:31:08.972 )") 00:31:08.972 02:05:54 -- target/dif.sh@72 -- # (( file++ )) 00:31:08.972 
02:05:54 -- nvmf/common.sh@542 -- # cat 00:31:08.972 02:05:54 -- target/dif.sh@72 -- # (( file <= files )) 00:31:08.972 02:05:54 -- nvmf/common.sh@544 -- # jq . 00:31:08.972 02:05:54 -- nvmf/common.sh@545 -- # IFS=, 00:31:08.972 02:05:54 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:08.972 "params": { 00:31:08.972 "name": "Nvme0", 00:31:08.972 "trtype": "tcp", 00:31:08.972 "traddr": "10.0.0.2", 00:31:08.972 "adrfam": "ipv4", 00:31:08.972 "trsvcid": "4420", 00:31:08.972 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:08.972 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:08.972 "hdgst": false, 00:31:08.972 "ddgst": false 00:31:08.972 }, 00:31:08.972 "method": "bdev_nvme_attach_controller" 00:31:08.972 },{ 00:31:08.972 "params": { 00:31:08.972 "name": "Nvme1", 00:31:08.972 "trtype": "tcp", 00:31:08.972 "traddr": "10.0.0.2", 00:31:08.972 "adrfam": "ipv4", 00:31:08.972 "trsvcid": "4420", 00:31:08.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:08.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:08.972 "hdgst": false, 00:31:08.972 "ddgst": false 00:31:08.972 }, 00:31:08.972 "method": "bdev_nvme_attach_controller" 00:31:08.972 }' 00:31:08.972 02:05:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:08.972 02:05:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:08.972 02:05:54 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:08.972 02:05:54 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:08.972 02:05:54 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:08.972 02:05:54 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:09.230 02:05:54 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:09.230 02:05:54 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:09.230 02:05:54 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:09.230 02:05:54 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:09.230 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:09.230 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:09.230 fio-3.35 00:31:09.230 Starting 2 threads 00:31:09.488 EAL: No free 2048 kB hugepages reported on node 1 00:31:10.054 [2024-04-15 02:05:55.437574] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
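The rpc.c errors that follow each "Starting N threads" line are expected noise, not a failure: the fio spdk_bdev plugin boots its own SPDK application instance, tries to bind the default RPC socket, and finds /var/tmp/spdk.sock already owned by the nvmf_tgt under test; the run continues without an RPC server, as the completed fio output below shows. When a second SPDK app genuinely needs its own RPC endpoint, it gets a private socket instead, e.g. (illustrative):

    # Second SPDK app instance with its own RPC socket and shared-memory id:
    ./build/bin/nvmf_tgt -r /var/tmp/spdk2.sock -i 1 &
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_get_reactors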
00:31:10.054 [2024-04-15 02:05:55.437636] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:20.023 00:31:20.023 filename0: (groupid=0, jobs=1): err= 0: pid=2299648: Mon Apr 15 02:06:05 2024 00:31:20.023 read: IOPS=95, BW=381KiB/s (390kB/s)(3808KiB/10001msec) 00:31:20.023 slat (nsec): min=6661, max=34063, avg=9741.27, stdev=4906.32 00:31:20.023 clat (usec): min=41854, max=44532, avg=41988.64, stdev=190.24 00:31:20.023 lat (usec): min=41861, max=44561, avg=41998.38, stdev=190.63 00:31:20.023 clat percentiles (usec): 00:31:20.023 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:31:20.023 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:20.023 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:20.023 | 99.00th=[42730], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:31:20.023 | 99.99th=[44303] 00:31:20.023 bw ( KiB/s): min= 352, max= 384, per=33.85%, avg=380.63, stdev=10.09, samples=19 00:31:20.023 iops : min= 88, max= 96, avg=95.16, stdev= 2.52, samples=19 00:31:20.023 lat (msec) : 50=100.00% 00:31:20.023 cpu : usr=94.90%, sys=4.79%, ctx=15, majf=0, minf=136 00:31:20.023 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:20.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.023 issued rwts: total=952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:20.023 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:20.023 filename1: (groupid=0, jobs=1): err= 0: pid=2299649: Mon Apr 15 02:06:05 2024 00:31:20.023 read: IOPS=185, BW=743KiB/s (760kB/s)(7440KiB/10020msec) 00:31:20.023 slat (nsec): min=6562, max=32728, avg=9970.40, stdev=5450.98 00:31:20.023 clat (usec): min=1129, max=43541, avg=21515.88, stdev=20203.54 00:31:20.023 lat (usec): min=1136, max=43565, avg=21525.85, stdev=20201.98 00:31:20.023 clat percentiles (usec): 00:31:20.023 | 1.00th=[ 1188], 5.00th=[ 1221], 10.00th=[ 1237], 20.00th=[ 1270], 00:31:20.023 | 30.00th=[ 1287], 40.00th=[ 1303], 50.00th=[41681], 60.00th=[41681], 00:31:20.023 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:31:20.023 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:31:20.023 | 99.99th=[43779] 00:31:20.023 bw ( KiB/s): min= 704, max= 768, per=66.10%, avg=742.40, stdev=30.45, samples=20 00:31:20.023 iops : min= 176, max= 192, avg=185.60, stdev= 7.61, samples=20 00:31:20.023 lat (msec) : 2=49.89%, 50=50.11% 00:31:20.023 cpu : usr=95.20%, sys=4.48%, ctx=13, majf=0, minf=191 00:31:20.023 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:20.023 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.023 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:20.023 issued rwts: total=1860,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:20.023 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:20.023 00:31:20.023 Run status group 0 (all jobs): 00:31:20.023 READ: bw=1123KiB/s (1149kB/s), 381KiB/s-743KiB/s (390kB/s-760kB/s), io=11.0MiB (11.5MB), run=10001-10020msec 00:31:20.281 02:06:05 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:20.281 02:06:05 -- target/dif.sh@43 -- # local sub 00:31:20.281 02:06:05 -- target/dif.sh@45 -- # for sub in "$@" 00:31:20.282 02:06:05 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:20.282 02:06:05 -- target/dif.sh@36 -- 
# local sub_id=0 00:31:20.282 02:06:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:20.282 02:06:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.282 02:06:05 -- common/autotest_common.sh@10 -- # set +x 00:31:20.282 02:06:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.282 02:06:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:20.282 02:06:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.282 02:06:05 -- common/autotest_common.sh@10 -- # set +x 00:31:20.282 02:06:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.282 02:06:05 -- target/dif.sh@45 -- # for sub in "$@" 00:31:20.282 02:06:05 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:20.282 02:06:05 -- target/dif.sh@36 -- # local sub_id=1 00:31:20.282 02:06:05 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:20.282 02:06:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.282 02:06:05 -- common/autotest_common.sh@10 -- # set +x 00:31:20.282 02:06:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.282 02:06:05 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:20.282 02:06:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.282 02:06:05 -- common/autotest_common.sh@10 -- # set +x 00:31:20.282 02:06:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.282 00:31:20.282 real 0m11.349s 00:31:20.282 user 0m20.426s 00:31:20.282 sys 0m1.261s 00:31:20.282 02:06:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:20.282 02:06:05 -- common/autotest_common.sh@10 -- # set +x 00:31:20.282 ************************************ 00:31:20.282 END TEST fio_dif_1_multi_subsystems 00:31:20.282 ************************************ 00:31:20.282 02:06:05 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:20.282 02:06:05 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:20.282 02:06:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:20.282 02:06:05 -- common/autotest_common.sh@10 -- # set +x 00:31:20.282 ************************************ 00:31:20.282 START TEST fio_dif_rand_params 00:31:20.282 ************************************ 00:31:20.282 02:06:05 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:31:20.282 02:06:05 -- target/dif.sh@100 -- # local NULL_DIF 00:31:20.282 02:06:05 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:20.282 02:06:05 -- target/dif.sh@103 -- # NULL_DIF=3 00:31:20.282 02:06:05 -- target/dif.sh@103 -- # bs=128k 00:31:20.282 02:06:05 -- target/dif.sh@103 -- # numjobs=3 00:31:20.282 02:06:05 -- target/dif.sh@103 -- # iodepth=3 00:31:20.282 02:06:05 -- target/dif.sh@103 -- # runtime=5 00:31:20.282 02:06:05 -- target/dif.sh@105 -- # create_subsystems 0 00:31:20.282 02:06:05 -- target/dif.sh@28 -- # local sub 00:31:20.282 02:06:05 -- target/dif.sh@30 -- # for sub in "$@" 00:31:20.282 02:06:05 -- target/dif.sh@31 -- # create_subsystem 0 00:31:20.282 02:06:05 -- target/dif.sh@18 -- # local sub_id=0 00:31:20.282 02:06:05 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:20.282 02:06:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.282 02:06:05 -- common/autotest_common.sh@10 -- # set +x 00:31:20.282 bdev_null0 00:31:20.282 02:06:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.282 02:06:05 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:20.282 02:06:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.282 02:06:05 -- common/autotest_common.sh@10 -- # set +x 00:31:20.282 02:06:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.282 02:06:05 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:20.282 02:06:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.282 02:06:05 -- common/autotest_common.sh@10 -- # set +x 00:31:20.282 02:06:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.282 02:06:05 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:20.282 02:06:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:20.282 02:06:05 -- common/autotest_common.sh@10 -- # set +x 00:31:20.282 [2024-04-15 02:06:05.921681] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.282 02:06:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:20.282 02:06:05 -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:20.282 02:06:05 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:20.282 02:06:05 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:20.282 02:06:05 -- nvmf/common.sh@520 -- # config=() 00:31:20.282 02:06:05 -- nvmf/common.sh@520 -- # local subsystem config 00:31:20.282 02:06:05 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:20.282 02:06:05 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:20.282 { 00:31:20.282 "params": { 00:31:20.282 "name": "Nvme$subsystem", 00:31:20.282 "trtype": "$TEST_TRANSPORT", 00:31:20.282 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:20.282 "adrfam": "ipv4", 00:31:20.282 "trsvcid": "$NVMF_PORT", 00:31:20.282 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:20.282 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:20.282 "hdgst": ${hdgst:-false}, 00:31:20.282 "ddgst": ${ddgst:-false} 00:31:20.282 }, 00:31:20.282 "method": "bdev_nvme_attach_controller" 00:31:20.282 } 00:31:20.282 EOF 00:31:20.282 )") 00:31:20.282 02:06:05 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:20.282 02:06:05 -- target/dif.sh@82 -- # gen_fio_conf 00:31:20.282 02:06:05 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:20.282 02:06:05 -- target/dif.sh@54 -- # local file 00:31:20.282 02:06:05 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:20.282 02:06:05 -- target/dif.sh@56 -- # cat 00:31:20.282 02:06:05 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:20.282 02:06:05 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:20.282 02:06:05 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:20.282 02:06:05 -- common/autotest_common.sh@1320 -- # shift 00:31:20.282 02:06:05 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:20.282 02:06:05 -- nvmf/common.sh@542 -- # cat 00:31:20.282 02:06:05 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:20.540 02:06:05 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:20.540 02:06:05 -- target/dif.sh@72 -- # (( file <= files )) 00:31:20.540 02:06:05 -- common/autotest_common.sh@1324 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:20.540 02:06:05 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:20.540 02:06:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:20.540 02:06:05 -- nvmf/common.sh@544 -- # jq . 00:31:20.540 02:06:05 -- nvmf/common.sh@545 -- # IFS=, 00:31:20.540 02:06:05 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:20.540 "params": { 00:31:20.540 "name": "Nvme0", 00:31:20.540 "trtype": "tcp", 00:31:20.540 "traddr": "10.0.0.2", 00:31:20.540 "adrfam": "ipv4", 00:31:20.540 "trsvcid": "4420", 00:31:20.540 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:20.540 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:20.540 "hdgst": false, 00:31:20.540 "ddgst": false 00:31:20.540 }, 00:31:20.540 "method": "bdev_nvme_attach_controller" 00:31:20.540 }' 00:31:20.540 02:06:05 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:20.540 02:06:05 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:20.540 02:06:05 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:20.540 02:06:05 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:20.540 02:06:05 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:20.540 02:06:05 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:20.540 02:06:05 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:20.540 02:06:05 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:20.540 02:06:05 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:20.540 02:06:05 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:20.540 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:20.540 ... 00:31:20.540 fio-3.35 00:31:20.540 Starting 3 threads 00:31:20.800 EAL: No free 2048 kB hugepages reported on node 1 00:31:21.058 [2024-04-15 02:06:06.603194] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
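This pass (fio_dif_rand_params) rebuilds the null bdev with --dif-type 3 and drives it with the parameters set at the top of the test: bs=128k, numjobs=3, iodepth=3, runtime=5. The generated job is equivalent in spirit to the command line below; the filename and the time_based flag are assumptions, since the real job file comes from gen_fio_conf:

    fio --name=filename0 --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
        --thread=1 --filename=Nvme0n1 \
        --rw=randread --bs=128k --numjobs=3 --iodepth=3 \
        --runtime=5 --time_based=1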
00:31:21.058 [2024-04-15 02:06:06.603274] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:26.323 00:31:26.323 filename0: (groupid=0, jobs=1): err= 0: pid=2301194: Mon Apr 15 02:06:11 2024 00:31:26.323 read: IOPS=148, BW=18.6MiB/s (19.5MB/s)(93.6MiB/5027msec) 00:31:26.323 slat (nsec): min=7272, max=35758, avg=11916.20, stdev=3586.01 00:31:26.323 clat (usec): min=7831, max=94048, avg=20113.45, stdev=17418.09 00:31:26.323 lat (usec): min=7842, max=94062, avg=20125.36, stdev=17418.07 00:31:26.323 clat percentiles (usec): 00:31:26.323 | 1.00th=[ 8029], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9896], 00:31:26.323 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11863], 60.00th=[12911], 00:31:26.323 | 70.00th=[13960], 80.00th=[50070], 90.00th=[52691], 95.00th=[54264], 00:31:26.323 | 99.00th=[55837], 99.50th=[57410], 99.90th=[93848], 99.95th=[93848], 00:31:26.323 | 99.99th=[93848] 00:31:26.323 bw ( KiB/s): min=13824, max=23808, per=29.84%, avg=19097.60, stdev=2784.01, samples=10 00:31:26.323 iops : min= 108, max= 186, avg=149.20, stdev=21.75, samples=10 00:31:26.323 lat (msec) : 10=21.50%, 20=57.68%, 50=0.67%, 100=20.16% 00:31:26.323 cpu : usr=91.42%, sys=7.98%, ctx=12, majf=0, minf=98 00:31:26.323 IO depths : 1=3.3%, 2=96.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.323 issued rwts: total=749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.323 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:26.323 filename0: (groupid=0, jobs=1): err= 0: pid=2301195: Mon Apr 15 02:06:11 2024 00:31:26.323 read: IOPS=156, BW=19.5MiB/s (20.5MB/s)(98.1MiB/5023msec) 00:31:26.323 slat (nsec): min=7220, max=36704, avg=12229.12, stdev=3981.84 00:31:26.323 clat (usec): min=7468, max=94622, avg=19175.59, stdev=16328.18 00:31:26.323 lat (usec): min=7479, max=94636, avg=19187.82, stdev=16328.45 00:31:26.323 clat percentiles (usec): 00:31:26.323 | 1.00th=[ 8160], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[10028], 00:31:26.323 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11731], 60.00th=[12780], 00:31:26.323 | 70.00th=[13829], 80.00th=[15664], 90.00th=[52167], 95.00th=[53740], 00:31:26.323 | 99.00th=[55837], 99.50th=[56886], 99.90th=[94897], 99.95th=[94897], 00:31:26.323 | 99.99th=[94897] 00:31:26.323 bw ( KiB/s): min=13824, max=24320, per=31.29%, avg=20022.80, stdev=3470.32, samples=10 00:31:26.323 iops : min= 108, max= 190, avg=156.40, stdev=27.13, samples=10 00:31:26.323 lat (msec) : 10=20.00%, 20=61.27%, 50=0.51%, 100=18.22% 00:31:26.323 cpu : usr=91.84%, sys=7.53%, ctx=7, majf=0, minf=76 00:31:26.323 IO depths : 1=3.2%, 2=96.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.323 issued rwts: total=785,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.323 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:26.323 filename0: (groupid=0, jobs=1): err= 0: pid=2301196: Mon Apr 15 02:06:11 2024 00:31:26.323 read: IOPS=195, BW=24.5MiB/s (25.6MB/s)(122MiB/5005msec) 00:31:26.323 slat (nsec): min=7305, max=64639, avg=12635.16, stdev=4099.85 00:31:26.323 clat (usec): min=6785, max=95371, avg=15315.64, stdev=13848.99 00:31:26.323 lat (usec): min=6812, max=95385, avg=15328.28, stdev=13848.82 00:31:26.323 clat 
percentiles (usec): 00:31:26.323 | 1.00th=[ 7308], 5.00th=[ 7635], 10.00th=[ 8094], 20.00th=[ 8979], 00:31:26.323 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[10683], 60.00th=[11338], 00:31:26.323 | 70.00th=[12256], 80.00th=[13566], 90.00th=[50594], 95.00th=[52167], 00:31:26.323 | 99.00th=[54264], 99.50th=[55313], 99.90th=[94897], 99.95th=[94897], 00:31:26.323 | 99.99th=[94897] 00:31:26.323 bw ( KiB/s): min=17408, max=36352, per=39.05%, avg=24985.60, stdev=6119.30, samples=10 00:31:26.323 iops : min= 136, max= 284, avg=195.20, stdev=47.81, samples=10 00:31:26.323 lat (msec) : 10=35.44%, 20=53.63%, 50=0.61%, 100=10.32% 00:31:26.323 cpu : usr=90.17%, sys=9.05%, ctx=11, majf=0, minf=138 00:31:26.323 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:26.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:26.323 issued rwts: total=979,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:26.323 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:26.323 00:31:26.323 Run status group 0 (all jobs): 00:31:26.323 READ: bw=62.5MiB/s (65.5MB/s), 18.6MiB/s-24.5MiB/s (19.5MB/s-25.6MB/s), io=314MiB (329MB), run=5005-5027msec 00:31:26.615 02:06:12 -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:26.615 02:06:12 -- target/dif.sh@43 -- # local sub 00:31:26.615 02:06:12 -- target/dif.sh@45 -- # for sub in "$@" 00:31:26.615 02:06:12 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:26.615 02:06:12 -- target/dif.sh@36 -- # local sub_id=0 00:31:26.615 02:06:12 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:26.615 02:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.615 02:06:12 -- common/autotest_common.sh@10 -- # set +x 00:31:26.615 02:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.615 02:06:12 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:26.615 02:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.615 02:06:12 -- common/autotest_common.sh@10 -- # set +x 00:31:26.615 02:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.615 02:06:12 -- target/dif.sh@109 -- # NULL_DIF=2 00:31:26.615 02:06:12 -- target/dif.sh@109 -- # bs=4k 00:31:26.615 02:06:12 -- target/dif.sh@109 -- # numjobs=8 00:31:26.615 02:06:12 -- target/dif.sh@109 -- # iodepth=16 00:31:26.615 02:06:12 -- target/dif.sh@109 -- # runtime= 00:31:26.615 02:06:12 -- target/dif.sh@109 -- # files=2 00:31:26.615 02:06:12 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:26.615 02:06:12 -- target/dif.sh@28 -- # local sub 00:31:26.615 02:06:12 -- target/dif.sh@30 -- # for sub in "$@" 00:31:26.615 02:06:12 -- target/dif.sh@31 -- # create_subsystem 0 00:31:26.615 02:06:12 -- target/dif.sh@18 -- # local sub_id=0 00:31:26.615 02:06:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:26.615 02:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.615 02:06:12 -- common/autotest_common.sh@10 -- # set +x 00:31:26.615 bdev_null0 00:31:26.615 02:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.615 02:06:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:26.615 02:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.615 02:06:12 -- common/autotest_common.sh@10 -- # set +x 00:31:26.615 02:06:12 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.615 02:06:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:26.615 02:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.615 02:06:12 -- common/autotest_common.sh@10 -- # set +x 00:31:26.615 02:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.615 02:06:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:26.615 02:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.615 02:06:12 -- common/autotest_common.sh@10 -- # set +x 00:31:26.615 [2024-04-15 02:06:12.068881] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:26.615 02:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.615 02:06:12 -- target/dif.sh@30 -- # for sub in "$@" 00:31:26.615 02:06:12 -- target/dif.sh@31 -- # create_subsystem 1 00:31:26.615 02:06:12 -- target/dif.sh@18 -- # local sub_id=1 00:31:26.615 02:06:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:26.615 02:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.615 02:06:12 -- common/autotest_common.sh@10 -- # set +x 00:31:26.615 bdev_null1 00:31:26.615 02:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.615 02:06:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:26.615 02:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.615 02:06:12 -- common/autotest_common.sh@10 -- # set +x 00:31:26.615 02:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.615 02:06:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:26.615 02:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.615 02:06:12 -- common/autotest_common.sh@10 -- # set +x 00:31:26.615 02:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.615 02:06:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:26.615 02:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.615 02:06:12 -- common/autotest_common.sh@10 -- # set +x 00:31:26.615 02:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.615 02:06:12 -- target/dif.sh@30 -- # for sub in "$@" 00:31:26.615 02:06:12 -- target/dif.sh@31 -- # create_subsystem 2 00:31:26.615 02:06:12 -- target/dif.sh@18 -- # local sub_id=2 00:31:26.615 02:06:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:26.615 02:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.615 02:06:12 -- common/autotest_common.sh@10 -- # set +x 00:31:26.615 bdev_null2 00:31:26.615 02:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.615 02:06:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:26.615 02:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.615 02:06:12 -- common/autotest_common.sh@10 -- # set +x 00:31:26.615 02:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.615 02:06:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:26.615 02:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:31:26.615 02:06:12 -- common/autotest_common.sh@10 -- # set +x 00:31:26.615 02:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.615 02:06:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:26.615 02:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:26.616 02:06:12 -- common/autotest_common.sh@10 -- # set +x 00:31:26.616 02:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:26.616 02:06:12 -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:26.616 02:06:12 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:26.616 02:06:12 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:26.616 02:06:12 -- nvmf/common.sh@520 -- # config=() 00:31:26.616 02:06:12 -- nvmf/common.sh@520 -- # local subsystem config 00:31:26.616 02:06:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:26.616 02:06:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:26.616 { 00:31:26.616 "params": { 00:31:26.616 "name": "Nvme$subsystem", 00:31:26.616 "trtype": "$TEST_TRANSPORT", 00:31:26.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:26.616 "adrfam": "ipv4", 00:31:26.616 "trsvcid": "$NVMF_PORT", 00:31:26.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:26.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:26.616 "hdgst": ${hdgst:-false}, 00:31:26.616 "ddgst": ${ddgst:-false} 00:31:26.616 }, 00:31:26.616 "method": "bdev_nvme_attach_controller" 00:31:26.616 } 00:31:26.616 EOF 00:31:26.616 )") 00:31:26.616 02:06:12 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.616 02:06:12 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.616 02:06:12 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:26.616 02:06:12 -- target/dif.sh@82 -- # gen_fio_conf 00:31:26.616 02:06:12 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:26.616 02:06:12 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:26.616 02:06:12 -- target/dif.sh@54 -- # local file 00:31:26.616 02:06:12 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:26.616 02:06:12 -- target/dif.sh@56 -- # cat 00:31:26.616 02:06:12 -- common/autotest_common.sh@1320 -- # shift 00:31:26.616 02:06:12 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:26.616 02:06:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.616 02:06:12 -- nvmf/common.sh@542 -- # cat 00:31:26.616 02:06:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:26.616 02:06:12 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:26.616 02:06:12 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:26.616 02:06:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:26.616 02:06:12 -- target/dif.sh@72 -- # (( file <= files )) 00:31:26.616 02:06:12 -- target/dif.sh@73 -- # cat 00:31:26.616 02:06:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:26.616 02:06:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:26.616 { 00:31:26.616 "params": { 00:31:26.616 "name": "Nvme$subsystem", 00:31:26.616 "trtype": "$TEST_TRANSPORT", 00:31:26.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:26.616 "adrfam": "ipv4", 
00:31:26.616 "trsvcid": "$NVMF_PORT", 00:31:26.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:26.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:26.616 "hdgst": ${hdgst:-false}, 00:31:26.616 "ddgst": ${ddgst:-false} 00:31:26.616 }, 00:31:26.616 "method": "bdev_nvme_attach_controller" 00:31:26.616 } 00:31:26.616 EOF 00:31:26.616 )") 00:31:26.616 02:06:12 -- nvmf/common.sh@542 -- # cat 00:31:26.616 02:06:12 -- target/dif.sh@72 -- # (( file++ )) 00:31:26.616 02:06:12 -- target/dif.sh@72 -- # (( file <= files )) 00:31:26.616 02:06:12 -- target/dif.sh@73 -- # cat 00:31:26.616 02:06:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:26.616 02:06:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:26.616 { 00:31:26.616 "params": { 00:31:26.616 "name": "Nvme$subsystem", 00:31:26.616 "trtype": "$TEST_TRANSPORT", 00:31:26.616 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:26.616 "adrfam": "ipv4", 00:31:26.616 "trsvcid": "$NVMF_PORT", 00:31:26.616 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:26.616 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:26.616 "hdgst": ${hdgst:-false}, 00:31:26.616 "ddgst": ${ddgst:-false} 00:31:26.616 }, 00:31:26.616 "method": "bdev_nvme_attach_controller" 00:31:26.616 } 00:31:26.616 EOF 00:31:26.616 )") 00:31:26.616 02:06:12 -- target/dif.sh@72 -- # (( file++ )) 00:31:26.616 02:06:12 -- target/dif.sh@72 -- # (( file <= files )) 00:31:26.616 02:06:12 -- nvmf/common.sh@542 -- # cat 00:31:26.616 02:06:12 -- nvmf/common.sh@544 -- # jq . 00:31:26.616 02:06:12 -- nvmf/common.sh@545 -- # IFS=, 00:31:26.616 02:06:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:26.616 "params": { 00:31:26.616 "name": "Nvme0", 00:31:26.616 "trtype": "tcp", 00:31:26.616 "traddr": "10.0.0.2", 00:31:26.616 "adrfam": "ipv4", 00:31:26.616 "trsvcid": "4420", 00:31:26.616 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:26.616 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:26.616 "hdgst": false, 00:31:26.616 "ddgst": false 00:31:26.616 }, 00:31:26.616 "method": "bdev_nvme_attach_controller" 00:31:26.616 },{ 00:31:26.616 "params": { 00:31:26.616 "name": "Nvme1", 00:31:26.616 "trtype": "tcp", 00:31:26.616 "traddr": "10.0.0.2", 00:31:26.616 "adrfam": "ipv4", 00:31:26.616 "trsvcid": "4420", 00:31:26.616 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:26.616 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:26.616 "hdgst": false, 00:31:26.616 "ddgst": false 00:31:26.616 }, 00:31:26.616 "method": "bdev_nvme_attach_controller" 00:31:26.616 },{ 00:31:26.616 "params": { 00:31:26.616 "name": "Nvme2", 00:31:26.616 "trtype": "tcp", 00:31:26.616 "traddr": "10.0.0.2", 00:31:26.616 "adrfam": "ipv4", 00:31:26.616 "trsvcid": "4420", 00:31:26.616 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:26.616 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:26.616 "hdgst": false, 00:31:26.616 "ddgst": false 00:31:26.616 }, 00:31:26.616 "method": "bdev_nvme_attach_controller" 00:31:26.616 }' 00:31:26.616 02:06:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:26.616 02:06:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:26.616 02:06:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:26.616 02:06:12 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:26.616 02:06:12 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:26.616 02:06:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:26.616 02:06:12 -- common/autotest_common.sh@1324 -- # asan_lib= 
00:31:26.616 02:06:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:26.616 02:06:12 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:26.616 02:06:12 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:26.875 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:26.875 ... 00:31:26.875 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:26.875 ... 00:31:26.875 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:26.875 ... 00:31:26.875 fio-3.35 00:31:26.875 Starting 24 threads 00:31:26.875 EAL: No free 2048 kB hugepages reported on node 1 00:31:27.810 [2024-04-15 02:06:13.176736] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:31:27.810 [2024-04-15 02:06:13.176804] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:37.790 00:31:37.790 filename0: (groupid=0, jobs=1): err= 0: pid=2302592: Mon Apr 15 02:06:23 2024 00:31:37.790 read: IOPS=476, BW=1908KiB/s (1953kB/s)(18.6MiB/10006msec) 00:31:37.790 slat (usec): min=7, max=139, avg=30.76, stdev=21.32 00:31:37.790 clat (usec): min=10793, max=68051, avg=33381.57, stdev=6626.60 00:31:37.790 lat (usec): min=10804, max=68074, avg=33412.32, stdev=6624.15 00:31:37.790 clat percentiles (usec): 00:31:37.790 | 1.00th=[17433], 5.00th=[27132], 10.00th=[28967], 20.00th=[29754], 00:31:37.790 | 30.00th=[30278], 40.00th=[30802], 50.00th=[31327], 60.00th=[31851], 00:31:37.790 | 70.00th=[33424], 80.00th=[38536], 90.00th=[42206], 95.00th=[45876], 00:31:37.790 | 99.00th=[55837], 99.50th=[61080], 99.90th=[67634], 99.95th=[67634], 00:31:37.790 | 99.99th=[67634] 00:31:37.790 bw ( KiB/s): min= 1504, max= 2176, per=4.11%, avg=1901.63, stdev=170.82, samples=19 00:31:37.790 iops : min= 376, max= 544, avg=475.37, stdev=42.73, samples=19 00:31:37.790 lat (msec) : 20=2.05%, 50=95.70%, 100=2.24% 00:31:37.790 cpu : usr=98.15%, sys=1.35%, ctx=18, majf=0, minf=41 00:31:37.790 IO depths : 1=0.3%, 2=0.9%, 4=10.5%, 8=73.5%, 16=14.8%, 32=0.0%, >=64=0.0% 00:31:37.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.790 complete : 0=0.0%, 4=91.6%, 8=5.1%, 16=3.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.790 issued rwts: total=4772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.790 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.790 filename0: (groupid=0, jobs=1): err= 0: pid=2302593: Mon Apr 15 02:06:23 2024 00:31:37.790 read: IOPS=491, BW=1965KiB/s (2012kB/s)(19.2MiB/10008msec) 00:31:37.790 slat (usec): min=7, max=142, avg=29.69, stdev=21.58 00:31:37.790 clat (usec): min=14952, max=57746, avg=32365.81, stdev=4805.97 00:31:37.790 lat (usec): min=14962, max=57795, avg=32395.50, stdev=4809.13 00:31:37.790 clat percentiles (usec): 00:31:37.790 | 1.00th=[20055], 5.00th=[27395], 10.00th=[28967], 20.00th=[29754], 00:31:37.790 | 30.00th=[30278], 40.00th=[30802], 50.00th=[31065], 60.00th=[31589], 00:31:37.790 | 70.00th=[32113], 80.00th=[36439], 90.00th=[39584], 95.00th=[40633], 00:31:37.790 | 99.00th=[48497], 99.50th=[50594], 99.90th=[54789], 99.95th=[55837], 00:31:37.790 | 99.99th=[57934] 00:31:37.790 bw ( KiB/s): min= 1660, max= 2304, per=4.24%, avg=1961.63, stdev=172.08, 
samples=19 00:31:37.790 iops : min= 415, max= 576, avg=490.37, stdev=43.00, samples=19 00:31:37.790 lat (msec) : 20=0.94%, 50=98.56%, 100=0.51% 00:31:37.790 cpu : usr=98.28%, sys=1.30%, ctx=18, majf=0, minf=27 00:31:37.790 IO depths : 1=1.6%, 2=3.5%, 4=13.6%, 8=68.8%, 16=12.6%, 32=0.0%, >=64=0.0% 00:31:37.790 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.790 complete : 0=0.0%, 4=92.1%, 8=3.8%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.790 issued rwts: total=4917,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.790 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.790 filename0: (groupid=0, jobs=1): err= 0: pid=2302594: Mon Apr 15 02:06:23 2024 00:31:37.790 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10008msec) 00:31:37.790 slat (usec): min=3, max=366, avg=32.02, stdev=19.90 00:31:37.790 clat (usec): min=9417, max=60905, avg=32466.08, stdev=5782.76 00:31:37.790 lat (usec): min=9426, max=60926, avg=32498.10, stdev=5783.95 00:31:37.790 clat percentiles (usec): 00:31:37.790 | 1.00th=[15664], 5.00th=[25822], 10.00th=[28967], 20.00th=[29754], 00:31:37.790 | 30.00th=[30278], 40.00th=[30802], 50.00th=[31327], 60.00th=[31589], 00:31:37.790 | 70.00th=[32113], 80.00th=[37487], 90.00th=[39584], 95.00th=[42206], 00:31:37.790 | 99.00th=[51643], 99.50th=[57410], 99.90th=[61080], 99.95th=[61080], 00:31:37.791 | 99.99th=[61080] 00:31:37.791 bw ( KiB/s): min= 1536, max= 2096, per=4.21%, avg=1948.58, stdev=148.78, samples=19 00:31:37.791 iops : min= 384, max= 524, avg=487.11, stdev=37.24, samples=19 00:31:37.791 lat (msec) : 10=0.16%, 20=2.21%, 50=96.41%, 100=1.23% 00:31:37.791 cpu : usr=96.16%, sys=1.99%, ctx=111, majf=0, minf=48 00:31:37.791 IO depths : 1=2.0%, 2=6.1%, 4=19.2%, 8=61.0%, 16=11.6%, 32=0.0%, >=64=0.0% 00:31:37.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.791 complete : 0=0.0%, 4=93.2%, 8=2.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.791 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.791 filename0: (groupid=0, jobs=1): err= 0: pid=2302595: Mon Apr 15 02:06:23 2024 00:31:37.791 read: IOPS=471, BW=1886KiB/s (1932kB/s)(18.4MiB/10008msec) 00:31:37.791 slat (usec): min=7, max=135, avg=29.22, stdev=19.67 00:31:37.791 clat (usec): min=6513, max=76290, avg=33768.22, stdev=7369.35 00:31:37.791 lat (usec): min=6522, max=76298, avg=33797.43, stdev=7367.76 00:31:37.791 clat percentiles (usec): 00:31:37.791 | 1.00th=[16909], 5.00th=[26870], 10.00th=[28967], 20.00th=[30016], 00:31:37.791 | 30.00th=[30278], 40.00th=[31065], 50.00th=[31327], 60.00th=[32113], 00:31:37.791 | 70.00th=[34341], 80.00th=[39060], 90.00th=[42730], 95.00th=[47973], 00:31:37.791 | 99.00th=[61080], 99.50th=[62653], 99.90th=[70779], 99.95th=[72877], 00:31:37.791 | 99.99th=[76022] 00:31:37.791 bw ( KiB/s): min= 1536, max= 2144, per=4.05%, avg=1873.42, stdev=185.76, samples=19 00:31:37.791 iops : min= 384, max= 536, avg=468.32, stdev=46.46, samples=19 00:31:37.791 lat (msec) : 10=0.25%, 20=2.12%, 50=93.67%, 100=3.96% 00:31:37.791 cpu : usr=95.58%, sys=2.34%, ctx=346, majf=0, minf=41 00:31:37.791 IO depths : 1=0.1%, 2=0.6%, 4=10.6%, 8=74.0%, 16=14.7%, 32=0.0%, >=64=0.0% 00:31:37.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.791 complete : 0=0.0%, 4=91.6%, 8=4.8%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.791 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.791 latency : target=0, 
window=0, percentile=100.00%, depth=16 00:31:37.791 filename0: (groupid=0, jobs=1): err= 0: pid=2302596: Mon Apr 15 02:06:23 2024 00:31:37.791 read: IOPS=491, BW=1965KiB/s (2013kB/s)(19.2MiB/10013msec) 00:31:37.791 slat (usec): min=7, max=401, avg=30.62, stdev=14.27 00:31:37.791 clat (usec): min=14071, max=63626, avg=32344.31, stdev=4592.47 00:31:37.791 lat (usec): min=14103, max=63663, avg=32374.93, stdev=4592.80 00:31:37.791 clat percentiles (usec): 00:31:37.791 | 1.00th=[20841], 5.00th=[27395], 10.00th=[29230], 20.00th=[30016], 00:31:37.791 | 30.00th=[30540], 40.00th=[30802], 50.00th=[31065], 60.00th=[31589], 00:31:37.791 | 70.00th=[32113], 80.00th=[35390], 90.00th=[39060], 95.00th=[40109], 00:31:37.791 | 99.00th=[43254], 99.50th=[50594], 99.90th=[63701], 99.95th=[63701], 00:31:37.791 | 99.99th=[63701] 00:31:37.791 bw ( KiB/s): min= 1568, max= 2139, per=4.25%, avg=1965.50, stdev=154.05, samples=20 00:31:37.791 iops : min= 392, max= 534, avg=491.30, stdev=38.42, samples=20 00:31:37.791 lat (msec) : 20=0.65%, 50=98.84%, 100=0.51% 00:31:37.791 cpu : usr=89.17%, sys=4.61%, ctx=358, majf=0, minf=51 00:31:37.791 IO depths : 1=2.6%, 2=5.3%, 4=16.4%, 8=64.3%, 16=11.3%, 32=0.0%, >=64=0.0% 00:31:37.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.791 complete : 0=0.0%, 4=92.8%, 8=2.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.791 issued rwts: total=4920,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.791 filename0: (groupid=0, jobs=1): err= 0: pid=2302597: Mon Apr 15 02:06:23 2024 00:31:37.791 read: IOPS=481, BW=1928KiB/s (1974kB/s)(18.9MiB/10031msec) 00:31:37.791 slat (usec): min=5, max=219, avg=28.40, stdev=17.06 00:31:37.791 clat (usec): min=6625, max=67713, avg=32982.00, stdev=6833.26 00:31:37.791 lat (usec): min=6633, max=67738, avg=33010.40, stdev=6834.57 00:31:37.791 clat percentiles (usec): 00:31:37.791 | 1.00th=[12518], 5.00th=[22676], 10.00th=[28443], 20.00th=[29754], 00:31:37.791 | 30.00th=[30278], 40.00th=[30802], 50.00th=[31327], 60.00th=[31851], 00:31:37.791 | 70.00th=[33817], 80.00th=[38536], 90.00th=[41157], 95.00th=[44303], 00:31:37.791 | 99.00th=[52691], 99.50th=[57410], 99.90th=[67634], 99.95th=[67634], 00:31:37.791 | 99.99th=[67634] 00:31:37.791 bw ( KiB/s): min= 1568, max= 2176, per=4.17%, avg=1927.95, stdev=164.11, samples=20 00:31:37.791 iops : min= 392, max= 544, avg=481.95, stdev=41.00, samples=20 00:31:37.791 lat (msec) : 10=0.50%, 20=3.10%, 50=94.52%, 100=1.88% 00:31:37.791 cpu : usr=96.84%, sys=1.83%, ctx=122, majf=0, minf=47 00:31:37.791 IO depths : 1=2.6%, 2=5.7%, 4=18.7%, 8=62.6%, 16=10.3%, 32=0.0%, >=64=0.0% 00:31:37.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.791 complete : 0=0.0%, 4=92.8%, 8=1.9%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.791 issued rwts: total=4834,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.791 filename0: (groupid=0, jobs=1): err= 0: pid=2302598: Mon Apr 15 02:06:23 2024 00:31:37.791 read: IOPS=483, BW=1934KiB/s (1981kB/s)(18.9MiB/10023msec) 00:31:37.791 slat (usec): min=5, max=157, avg=38.31, stdev=24.07 00:31:37.791 clat (usec): min=10163, max=70834, avg=32802.34, stdev=5798.23 00:31:37.791 lat (usec): min=10296, max=70853, avg=32840.65, stdev=5799.11 00:31:37.791 clat percentiles (usec): 00:31:37.791 | 1.00th=[19530], 5.00th=[26346], 10.00th=[28967], 20.00th=[29754], 00:31:37.791 | 30.00th=[30540], 40.00th=[30802], 
50.00th=[31065], 60.00th=[31589], 00:31:37.791 | 70.00th=[32637], 80.00th=[38011], 90.00th=[39584], 95.00th=[41681], 00:31:37.791 | 99.00th=[55313], 99.50th=[57934], 99.90th=[66323], 99.95th=[66323], 00:31:37.791 | 99.99th=[70779] 00:31:37.791 bw ( KiB/s): min= 1632, max= 2176, per=4.18%, avg=1935.55, stdev=156.78, samples=20 00:31:37.791 iops : min= 408, max= 544, avg=483.85, stdev=39.17, samples=20 00:31:37.791 lat (msec) : 20=1.42%, 50=97.01%, 100=1.57% 00:31:37.791 cpu : usr=98.39%, sys=1.19%, ctx=17, majf=0, minf=51 00:31:37.791 IO depths : 1=2.1%, 2=5.1%, 4=17.7%, 8=63.8%, 16=11.3%, 32=0.0%, >=64=0.0% 00:31:37.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.791 complete : 0=0.0%, 4=93.0%, 8=2.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.791 issued rwts: total=4847,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.791 filename0: (groupid=0, jobs=1): err= 0: pid=2302600: Mon Apr 15 02:06:23 2024 00:31:37.791 read: IOPS=488, BW=1956KiB/s (2002kB/s)(19.2MiB/10035msec) 00:31:37.791 slat (usec): min=4, max=133, avg=35.39, stdev=18.45 00:31:37.791 clat (usec): min=11032, max=58000, avg=32422.93, stdev=5367.23 00:31:37.791 lat (usec): min=11049, max=58026, avg=32458.32, stdev=5368.46 00:31:37.791 clat percentiles (usec): 00:31:37.791 | 1.00th=[16450], 5.00th=[26346], 10.00th=[28967], 20.00th=[29754], 00:31:37.791 | 30.00th=[30278], 40.00th=[30802], 50.00th=[31065], 60.00th=[31589], 00:31:37.791 | 70.00th=[32375], 80.00th=[38011], 90.00th=[39584], 95.00th=[41157], 00:31:37.791 | 99.00th=[47449], 99.50th=[50594], 99.90th=[57934], 99.95th=[57934], 00:31:37.791 | 99.99th=[57934] 00:31:37.791 bw ( KiB/s): min= 1536, max= 2176, per=4.23%, avg=1957.55, stdev=149.87, samples=20 00:31:37.791 iops : min= 384, max= 544, avg=489.35, stdev=37.48, samples=20 00:31:37.791 lat (msec) : 20=2.63%, 50=96.76%, 100=0.61% 00:31:37.791 cpu : usr=95.66%, sys=2.17%, ctx=115, majf=0, minf=33 00:31:37.791 IO depths : 1=3.8%, 2=8.5%, 4=21.5%, 8=57.1%, 16=9.1%, 32=0.0%, >=64=0.0% 00:31:37.791 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.791 complete : 0=0.0%, 4=93.6%, 8=0.8%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.791 issued rwts: total=4906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.791 filename1: (groupid=0, jobs=1): err= 0: pid=2302601: Mon Apr 15 02:06:23 2024 00:31:37.791 read: IOPS=473, BW=1894KiB/s (1940kB/s)(18.5MiB/10008msec) 00:31:37.791 slat (usec): min=7, max=160, avg=43.71, stdev=32.22 00:31:37.791 clat (msec): min=8, max=114, avg=33.54, stdev= 8.36 00:31:37.791 lat (msec): min=8, max=114, avg=33.59, stdev= 8.37 00:31:37.791 clat percentiles (msec): 00:31:37.791 | 1.00th=[ 17], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 31], 00:31:37.791 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 32], 00:31:37.791 | 70.00th=[ 34], 80.00th=[ 39], 90.00th=[ 41], 95.00th=[ 46], 00:31:37.791 | 99.00th=[ 68], 99.50th=[ 85], 99.90th=[ 114], 99.95th=[ 114], 00:31:37.791 | 99.99th=[ 114] 00:31:37.791 bw ( KiB/s): min= 1328, max= 2176, per=4.06%, avg=1879.58, stdev=205.68, samples=19 00:31:37.791 iops : min= 332, max= 544, avg=469.89, stdev=51.42, samples=19 00:31:37.791 lat (msec) : 10=0.17%, 20=1.84%, 50=95.61%, 100=2.05%, 250=0.34% 00:31:37.791 cpu : usr=98.46%, sys=1.10%, ctx=15, majf=0, minf=38 00:31:37.791 IO depths : 1=1.2%, 2=2.7%, 4=12.0%, 8=70.2%, 16=13.9%, 32=0.0%, >=64=0.0% 00:31:37.791 
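A quick cross-check of these per-file results: with the 4 KiB random reads used in this pass, reported bandwidth should equal IOPS times 4 KiB. Taking one of the files above as an example (values copied from the log):

# Rough sanity check of one job's numbers from the log above:
echo $(( 490 * 4 ))   # ~1960 KiB/s at ~490 avg iops, matching the reported avg bw of ~1961 KiB/s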
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.791 complete : 0=0.0%, 4=91.7%, 8=5.1%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.791 issued rwts: total=4740,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.791 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.791 filename1: (groupid=0, jobs=1): err= 0: pid=2302602: Mon Apr 15 02:06:23 2024 00:31:37.791 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10023msec) 00:31:37.791 slat (usec): min=7, max=1382, avg=47.97, stdev=41.63 00:31:37.791 clat (usec): min=8530, max=64100, avg=32900.50, stdev=6305.65 00:31:37.791 lat (usec): min=8540, max=64114, avg=32948.47, stdev=6310.14 00:31:37.791 clat percentiles (usec): 00:31:37.791 | 1.00th=[18744], 5.00th=[25560], 10.00th=[28443], 20.00th=[29754], 00:31:37.791 | 30.00th=[30278], 40.00th=[30802], 50.00th=[31065], 60.00th=[31851], 00:31:37.791 | 70.00th=[32637], 80.00th=[38011], 90.00th=[40633], 95.00th=[44303], 00:31:37.791 | 99.00th=[57410], 99.50th=[60556], 99.90th=[64226], 99.95th=[64226], 00:31:37.791 | 99.99th=[64226] 00:31:37.791 bw ( KiB/s): min= 1504, max= 2096, per=4.17%, avg=1928.35, stdev=163.61, samples=20 00:31:37.791 iops : min= 376, max= 524, avg=482.05, stdev=40.87, samples=20 00:31:37.791 lat (msec) : 10=0.17%, 20=1.96%, 50=95.35%, 100=2.52% 00:31:37.791 cpu : usr=98.07%, sys=1.27%, ctx=28, majf=0, minf=50 00:31:37.792 IO depths : 1=0.4%, 2=1.1%, 4=11.0%, 8=73.1%, 16=14.4%, 32=0.0%, >=64=0.0% 00:31:37.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.792 complete : 0=0.0%, 4=91.8%, 8=4.7%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.792 issued rwts: total=4835,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.792 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.792 filename1: (groupid=0, jobs=1): err= 0: pid=2302603: Mon Apr 15 02:06:23 2024 00:31:37.792 read: IOPS=495, BW=1984KiB/s (2031kB/s)(19.4MiB/10027msec) 00:31:37.792 slat (nsec): min=6495, max=87581, avg=26243.40, stdev=12856.67 00:31:37.792 clat (usec): min=9314, max=71249, avg=32055.92, stdev=5186.51 00:31:37.792 lat (usec): min=9323, max=71298, avg=32082.17, stdev=5188.43 00:31:37.792 clat percentiles (usec): 00:31:37.792 | 1.00th=[16581], 5.00th=[24773], 10.00th=[28705], 20.00th=[29492], 00:31:37.792 | 30.00th=[30278], 40.00th=[30540], 50.00th=[31065], 60.00th=[31589], 00:31:37.792 | 70.00th=[32113], 80.00th=[35914], 90.00th=[39584], 95.00th=[40109], 00:31:37.792 | 99.00th=[47449], 99.50th=[51119], 99.90th=[59507], 99.95th=[61080], 00:31:37.792 | 99.99th=[70779] 00:31:37.792 bw ( KiB/s): min= 1536, max= 2144, per=4.29%, avg=1982.80, stdev=146.90, samples=20 00:31:37.792 iops : min= 384, max= 536, avg=495.70, stdev=36.72, samples=20 00:31:37.792 lat (msec) : 10=0.14%, 20=2.33%, 50=97.02%, 100=0.50% 00:31:37.792 cpu : usr=97.76%, sys=1.54%, ctx=119, majf=0, minf=59 00:31:37.792 IO depths : 1=2.9%, 2=6.3%, 4=19.3%, 8=61.3%, 16=10.2%, 32=0.0%, >=64=0.0% 00:31:37.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.792 complete : 0=0.0%, 4=93.5%, 8=1.1%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.792 issued rwts: total=4973,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.792 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.792 filename1: (groupid=0, jobs=1): err= 0: pid=2302604: Mon Apr 15 02:06:23 2024 00:31:37.792 read: IOPS=479, BW=1917KiB/s (1963kB/s)(18.8MiB/10023msec) 00:31:37.792 slat (usec): min=7, max=112, avg=29.89, stdev=17.43 00:31:37.792 clat (usec): min=12676, 
max=71561, avg=33197.76, stdev=6309.22 00:31:37.792 lat (usec): min=12714, max=71581, avg=33227.65, stdev=6309.14 00:31:37.792 clat percentiles (usec): 00:31:37.792 | 1.00th=[17171], 5.00th=[24773], 10.00th=[28967], 20.00th=[30016], 00:31:37.792 | 30.00th=[30540], 40.00th=[31065], 50.00th=[31327], 60.00th=[31851], 00:31:37.792 | 70.00th=[34341], 80.00th=[38536], 90.00th=[40109], 95.00th=[43779], 00:31:37.792 | 99.00th=[54264], 99.50th=[60556], 99.90th=[66847], 99.95th=[66847], 00:31:37.792 | 99.99th=[71828] 00:31:37.792 bw ( KiB/s): min= 1616, max= 2304, per=4.14%, avg=1916.35, stdev=181.56, samples=20 00:31:37.792 iops : min= 404, max= 576, avg=479.05, stdev=45.36, samples=20 00:31:37.792 lat (msec) : 20=2.02%, 50=95.94%, 100=2.04% 00:31:37.792 cpu : usr=96.43%, sys=2.37%, ctx=297, majf=0, minf=49 00:31:37.792 IO depths : 1=0.8%, 2=1.8%, 4=9.1%, 8=75.2%, 16=13.0%, 32=0.0%, >=64=0.0% 00:31:37.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.792 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.792 issued rwts: total=4803,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.792 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.792 filename1: (groupid=0, jobs=1): err= 0: pid=2302605: Mon Apr 15 02:06:23 2024 00:31:37.792 read: IOPS=473, BW=1895KiB/s (1941kB/s)(18.5MiB/10007msec) 00:31:37.792 slat (usec): min=7, max=111, avg=27.05, stdev=16.99 00:31:37.792 clat (usec): min=8142, max=65272, avg=33632.46, stdev=6279.31 00:31:37.792 lat (usec): min=8152, max=65282, avg=33659.51, stdev=6279.39 00:31:37.792 clat percentiles (usec): 00:31:37.792 | 1.00th=[17695], 5.00th=[27395], 10.00th=[29230], 20.00th=[30016], 00:31:37.792 | 30.00th=[30540], 40.00th=[31065], 50.00th=[31327], 60.00th=[32113], 00:31:37.792 | 70.00th=[35914], 80.00th=[39060], 90.00th=[41157], 95.00th=[45351], 00:31:37.792 | 99.00th=[55837], 99.50th=[57934], 99.90th=[58983], 99.95th=[58983], 00:31:37.792 | 99.99th=[65274] 00:31:37.792 bw ( KiB/s): min= 1536, max= 2056, per=4.08%, avg=1889.84, stdev=149.66, samples=19 00:31:37.792 iops : min= 384, max= 514, avg=472.42, stdev=37.44, samples=19 00:31:37.792 lat (msec) : 10=0.08%, 20=1.81%, 50=95.93%, 100=2.17% 00:31:37.792 cpu : usr=98.12%, sys=1.43%, ctx=17, majf=0, minf=40 00:31:37.792 IO depths : 1=0.2%, 2=0.7%, 4=9.4%, 8=74.5%, 16=15.1%, 32=0.0%, >=64=0.0% 00:31:37.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.792 complete : 0=0.0%, 4=91.4%, 8=5.5%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.792 issued rwts: total=4741,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.792 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.792 filename1: (groupid=0, jobs=1): err= 0: pid=2302606: Mon Apr 15 02:06:23 2024 00:31:37.792 read: IOPS=495, BW=1981KiB/s (2028kB/s)(19.4MiB/10032msec) 00:31:37.792 slat (usec): min=5, max=885, avg=25.81, stdev=33.02 00:31:37.792 clat (usec): min=6896, max=57373, avg=32103.84, stdev=5639.59 00:31:37.792 lat (usec): min=6908, max=57382, avg=32129.65, stdev=5639.13 00:31:37.792 clat percentiles (usec): 00:31:37.792 | 1.00th=[15664], 5.00th=[23462], 10.00th=[28181], 20.00th=[29754], 00:31:37.792 | 30.00th=[30278], 40.00th=[30802], 50.00th=[31327], 60.00th=[31589], 00:31:37.792 | 70.00th=[32375], 80.00th=[35914], 90.00th=[39584], 95.00th=[41157], 00:31:37.792 | 99.00th=[49546], 99.50th=[52167], 99.90th=[55837], 99.95th=[57410], 00:31:37.792 | 99.99th=[57410] 00:31:37.792 bw ( KiB/s): min= 1552, max= 2176, per=4.28%, avg=1980.35, 
stdev=157.39, samples=20 00:31:37.792 iops : min= 388, max= 544, avg=495.05, stdev=39.31, samples=20 00:31:37.792 lat (msec) : 10=0.58%, 20=2.07%, 50=96.42%, 100=0.93% 00:31:37.792 cpu : usr=91.71%, sys=3.68%, ctx=93, majf=0, minf=73 00:31:37.792 IO depths : 1=2.6%, 2=6.6%, 4=18.9%, 8=60.9%, 16=10.9%, 32=0.0%, >=64=0.0% 00:31:37.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.792 complete : 0=0.0%, 4=93.1%, 8=2.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.792 issued rwts: total=4968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.792 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.792 filename1: (groupid=0, jobs=1): err= 0: pid=2302608: Mon Apr 15 02:06:23 2024 00:31:37.792 read: IOPS=492, BW=1970KiB/s (2018kB/s)(19.2MiB/10005msec) 00:31:37.792 slat (usec): min=6, max=137, avg=35.18, stdev=18.61 00:31:37.792 clat (usec): min=12101, max=54663, avg=32216.37, stdev=4827.76 00:31:37.792 lat (usec): min=12146, max=54742, avg=32251.55, stdev=4829.32 00:31:37.792 clat percentiles (usec): 00:31:37.792 | 1.00th=[17695], 5.00th=[27919], 10.00th=[28967], 20.00th=[29754], 00:31:37.792 | 30.00th=[30278], 40.00th=[30540], 50.00th=[31065], 60.00th=[31327], 00:31:37.792 | 70.00th=[31851], 80.00th=[35390], 90.00th=[39584], 95.00th=[40109], 00:31:37.792 | 99.00th=[48497], 99.50th=[50594], 99.90th=[53216], 99.95th=[53740], 00:31:37.792 | 99.99th=[54789] 00:31:37.792 bw ( KiB/s): min= 1660, max= 2176, per=4.25%, avg=1966.26, stdev=173.83, samples=19 00:31:37.792 iops : min= 415, max= 544, avg=491.53, stdev=43.44, samples=19 00:31:37.792 lat (msec) : 20=1.34%, 50=98.07%, 100=0.59% 00:31:37.792 cpu : usr=98.39%, sys=1.15%, ctx=18, majf=0, minf=56 00:31:37.792 IO depths : 1=2.9%, 2=7.1%, 4=22.0%, 8=57.8%, 16=10.1%, 32=0.0%, >=64=0.0% 00:31:37.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.792 complete : 0=0.0%, 4=94.1%, 8=0.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.792 issued rwts: total=4928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.792 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.792 filename1: (groupid=0, jobs=1): err= 0: pid=2302609: Mon Apr 15 02:06:23 2024 00:31:37.792 read: IOPS=462, BW=1850KiB/s (1894kB/s)(18.1MiB/10008msec) 00:31:37.792 slat (usec): min=7, max=156, avg=44.17, stdev=29.29 00:31:37.792 clat (usec): min=9985, max=69378, avg=34310.16, stdev=6043.37 00:31:37.792 lat (usec): min=10023, max=69396, avg=34354.33, stdev=6039.42 00:31:37.792 clat percentiles (usec): 00:31:37.792 | 1.00th=[18220], 5.00th=[28443], 10.00th=[29230], 20.00th=[30278], 00:31:37.792 | 30.00th=[30802], 40.00th=[31327], 50.00th=[31851], 60.00th=[33817], 00:31:37.792 | 70.00th=[38011], 80.00th=[39584], 90.00th=[41681], 95.00th=[44303], 00:31:37.792 | 99.00th=[52691], 99.50th=[53740], 99.90th=[58459], 99.95th=[58459], 00:31:37.792 | 99.99th=[69731] 00:31:37.792 bw ( KiB/s): min= 1552, max= 2048, per=3.97%, avg=1835.32, stdev=140.12, samples=19 00:31:37.792 iops : min= 388, max= 512, avg=458.79, stdev=34.97, samples=19 00:31:37.792 lat (msec) : 10=0.02%, 20=1.38%, 50=97.02%, 100=1.58% 00:31:37.792 cpu : usr=98.41%, sys=1.13%, ctx=19, majf=0, minf=47 00:31:37.792 IO depths : 1=0.8%, 2=4.2%, 4=18.9%, 8=63.4%, 16=12.7%, 32=0.0%, >=64=0.0% 00:31:37.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.792 complete : 0=0.0%, 4=93.3%, 8=1.8%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.792 issued rwts: total=4628,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.792 latency : 
target=0, window=0, percentile=100.00%, depth=16 00:31:37.792 filename2: (groupid=0, jobs=1): err= 0: pid=2302610: Mon Apr 15 02:06:23 2024 00:31:37.792 read: IOPS=479, BW=1916KiB/s (1962kB/s)(18.7MiB/10006msec) 00:31:37.792 slat (usec): min=7, max=149, avg=33.49, stdev=25.97 00:31:37.792 clat (usec): min=8037, max=65486, avg=33209.64, stdev=6763.08 00:31:37.792 lat (usec): min=8047, max=65514, avg=33243.13, stdev=6761.03 00:31:37.792 clat percentiles (usec): 00:31:37.792 | 1.00th=[15270], 5.00th=[26870], 10.00th=[28967], 20.00th=[29754], 00:31:37.792 | 30.00th=[30278], 40.00th=[30802], 50.00th=[31065], 60.00th=[31589], 00:31:37.792 | 70.00th=[33162], 80.00th=[38011], 90.00th=[41157], 95.00th=[46400], 00:31:37.792 | 99.00th=[57410], 99.50th=[62129], 99.90th=[65274], 99.95th=[65274], 00:31:37.792 | 99.99th=[65274] 00:31:37.792 bw ( KiB/s): min= 1536, max= 2144, per=4.13%, avg=1908.26, stdev=182.94, samples=19 00:31:37.792 iops : min= 384, max= 536, avg=476.95, stdev=45.72, samples=19 00:31:37.792 lat (msec) : 10=0.23%, 20=2.29%, 50=94.72%, 100=2.75% 00:31:37.792 cpu : usr=98.21%, sys=1.31%, ctx=24, majf=0, minf=34 00:31:37.792 IO depths : 1=0.3%, 2=1.3%, 4=12.0%, 8=72.0%, 16=14.4%, 32=0.0%, >=64=0.0% 00:31:37.792 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.792 complete : 0=0.0%, 4=92.1%, 8=4.2%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.792 issued rwts: total=4794,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.792 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.792 filename2: (groupid=0, jobs=1): err= 0: pid=2302611: Mon Apr 15 02:06:23 2024 00:31:37.792 read: IOPS=491, BW=1965KiB/s (2012kB/s)(19.2MiB/10026msec) 00:31:37.792 slat (usec): min=3, max=158, avg=48.86, stdev=32.18 00:31:37.792 clat (usec): min=12031, max=59037, avg=32238.08, stdev=5751.46 00:31:37.793 lat (usec): min=12124, max=59079, avg=32286.94, stdev=5747.69 00:31:37.793 clat percentiles (usec): 00:31:37.793 | 1.00th=[16057], 5.00th=[22414], 10.00th=[27919], 20.00th=[29754], 00:31:37.793 | 30.00th=[30278], 40.00th=[30540], 50.00th=[31065], 60.00th=[31589], 00:31:37.793 | 70.00th=[32375], 80.00th=[37487], 90.00th=[40109], 95.00th=[41681], 00:31:37.793 | 99.00th=[46924], 99.50th=[51119], 99.90th=[57410], 99.95th=[57934], 00:31:37.793 | 99.99th=[58983] 00:31:37.793 bw ( KiB/s): min= 1536, max= 2192, per=4.24%, avg=1963.40, stdev=160.85, samples=20 00:31:37.793 iops : min= 384, max= 548, avg=490.85, stdev=40.21, samples=20 00:31:37.793 lat (msec) : 20=3.53%, 50=95.88%, 100=0.59% 00:31:37.793 cpu : usr=97.35%, sys=1.41%, ctx=66, majf=0, minf=39 00:31:37.793 IO depths : 1=3.4%, 2=7.7%, 4=19.8%, 8=59.1%, 16=10.1%, 32=0.0%, >=64=0.0% 00:31:37.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.793 complete : 0=0.0%, 4=93.6%, 8=1.3%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.793 issued rwts: total=4925,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.793 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.793 filename2: (groupid=0, jobs=1): err= 0: pid=2302612: Mon Apr 15 02:06:23 2024 00:31:37.793 read: IOPS=493, BW=1973KiB/s (2020kB/s)(19.3MiB/10023msec) 00:31:37.793 slat (usec): min=7, max=166, avg=47.67, stdev=35.23 00:31:37.793 clat (usec): min=9135, max=70592, avg=32124.00, stdev=6136.61 00:31:37.793 lat (usec): min=9151, max=70688, avg=32171.68, stdev=6143.72 00:31:37.793 clat percentiles (usec): 00:31:37.793 | 1.00th=[13960], 5.00th=[24773], 10.00th=[28443], 20.00th=[29492], 00:31:37.793 | 30.00th=[30278], 
40.00th=[30540], 50.00th=[31065], 60.00th=[31327], 00:31:37.793 | 70.00th=[31851], 80.00th=[34866], 90.00th=[39060], 95.00th=[41157], 00:31:37.793 | 99.00th=[54789], 99.50th=[62129], 99.90th=[65274], 99.95th=[67634], 00:31:37.793 | 99.99th=[70779] 00:31:37.793 bw ( KiB/s): min= 1564, max= 2144, per=4.26%, avg=1971.95, stdev=158.73, samples=20 00:31:37.793 iops : min= 391, max= 536, avg=492.95, stdev=39.67, samples=20 00:31:37.793 lat (msec) : 10=0.08%, 20=2.95%, 50=95.08%, 100=1.88% 00:31:37.793 cpu : usr=98.13%, sys=1.37%, ctx=23, majf=0, minf=54 00:31:37.793 IO depths : 1=1.4%, 2=3.5%, 4=16.0%, 8=66.7%, 16=12.4%, 32=0.0%, >=64=0.0% 00:31:37.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.793 complete : 0=0.0%, 4=93.0%, 8=2.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.793 issued rwts: total=4944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.793 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.793 filename2: (groupid=0, jobs=1): err= 0: pid=2302613: Mon Apr 15 02:06:23 2024 00:31:37.793 read: IOPS=494, BW=1978KiB/s (2026kB/s)(19.4MiB/10026msec) 00:31:37.793 slat (usec): min=5, max=107, avg=26.21, stdev=14.79 00:31:37.793 clat (usec): min=7277, max=60342, avg=32130.24, stdev=5233.28 00:31:37.793 lat (usec): min=7292, max=60351, avg=32156.45, stdev=5235.13 00:31:37.793 clat percentiles (usec): 00:31:37.793 | 1.00th=[16450], 5.00th=[26608], 10.00th=[28967], 20.00th=[29754], 00:31:37.793 | 30.00th=[30278], 40.00th=[30802], 50.00th=[31065], 60.00th=[31589], 00:31:37.793 | 70.00th=[31851], 80.00th=[33817], 90.00th=[39060], 95.00th=[40109], 00:31:37.793 | 99.00th=[53216], 99.50th=[54789], 99.90th=[57934], 99.95th=[58983], 00:31:37.793 | 99.99th=[60556] 00:31:37.793 bw ( KiB/s): min= 1536, max= 2176, per=4.27%, avg=1977.35, stdev=163.81, samples=20 00:31:37.793 iops : min= 384, max= 544, avg=494.30, stdev=41.00, samples=20 00:31:37.793 lat (msec) : 10=0.14%, 20=1.21%, 50=97.36%, 100=1.29% 00:31:37.793 cpu : usr=98.48%, sys=1.11%, ctx=18, majf=0, minf=47 00:31:37.793 IO depths : 1=2.8%, 2=6.4%, 4=18.8%, 8=61.4%, 16=10.7%, 32=0.0%, >=64=0.0% 00:31:37.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.793 complete : 0=0.0%, 4=93.1%, 8=2.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.793 issued rwts: total=4959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.793 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.793 filename2: (groupid=0, jobs=1): err= 0: pid=2302614: Mon Apr 15 02:06:23 2024 00:31:37.793 read: IOPS=471, BW=1886KiB/s (1931kB/s)(18.4MiB/10005msec) 00:31:37.793 slat (usec): min=4, max=456, avg=29.57, stdev=17.32 00:31:37.793 clat (usec): min=12283, max=61076, avg=33699.85, stdev=6234.12 00:31:37.793 lat (usec): min=12306, max=61106, avg=33729.43, stdev=6234.01 00:31:37.793 clat percentiles (usec): 00:31:37.793 | 1.00th=[16057], 5.00th=[26084], 10.00th=[29230], 20.00th=[30016], 00:31:37.793 | 30.00th=[30540], 40.00th=[31065], 50.00th=[31589], 60.00th=[32375], 00:31:37.793 | 70.00th=[36439], 80.00th=[39060], 90.00th=[41157], 95.00th=[44303], 00:31:37.793 | 99.00th=[52167], 99.50th=[53740], 99.90th=[60556], 99.95th=[61080], 00:31:37.793 | 99.99th=[61080] 00:31:37.793 bw ( KiB/s): min= 1632, max= 2080, per=4.07%, avg=1883.42, stdev=146.75, samples=19 00:31:37.793 iops : min= 408, max= 520, avg=470.84, stdev=36.71, samples=19 00:31:37.793 lat (msec) : 20=2.18%, 50=95.74%, 100=2.08% 00:31:37.793 cpu : usr=89.76%, sys=4.34%, ctx=265, majf=0, minf=44 00:31:37.793 IO depths : 1=3.3%, 
2=7.6%, 4=20.7%, 8=58.7%, 16=9.7%, 32=0.0%, >=64=0.0% 00:31:37.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.793 complete : 0=0.0%, 4=93.6%, 8=1.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.793 issued rwts: total=4717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.793 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.793 filename2: (groupid=0, jobs=1): err= 0: pid=2302615: Mon Apr 15 02:06:23 2024 00:31:37.793 read: IOPS=476, BW=1906KiB/s (1952kB/s)(18.6MiB/10006msec) 00:31:37.793 slat (usec): min=7, max=803, avg=35.28, stdev=30.03 00:31:37.793 clat (usec): min=10583, max=73650, avg=33398.93, stdev=6388.15 00:31:37.793 lat (usec): min=10600, max=73688, avg=33434.22, stdev=6391.39 00:31:37.793 clat percentiles (usec): 00:31:37.793 | 1.00th=[17695], 5.00th=[27657], 10.00th=[28705], 20.00th=[29754], 00:31:37.793 | 30.00th=[30540], 40.00th=[30802], 50.00th=[31327], 60.00th=[31851], 00:31:37.793 | 70.00th=[33817], 80.00th=[38536], 90.00th=[41681], 95.00th=[45876], 00:31:37.793 | 99.00th=[55313], 99.50th=[55837], 99.90th=[64226], 99.95th=[72877], 00:31:37.793 | 99.99th=[73925] 00:31:37.793 bw ( KiB/s): min= 1552, max= 2104, per=4.09%, avg=1893.84, stdev=159.08, samples=19 00:31:37.793 iops : min= 388, max= 526, avg=473.42, stdev=39.73, samples=19 00:31:37.793 lat (msec) : 20=1.87%, 50=95.53%, 100=2.60% 00:31:37.793 cpu : usr=93.14%, sys=3.21%, ctx=213, majf=0, minf=41 00:31:37.793 IO depths : 1=0.1%, 2=0.8%, 4=9.4%, 8=74.6%, 16=15.1%, 32=0.0%, >=64=0.0% 00:31:37.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.793 complete : 0=0.0%, 4=91.4%, 8=5.5%, 16=3.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.793 issued rwts: total=4768,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.793 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.793 filename2: (groupid=0, jobs=1): err= 0: pid=2302616: Mon Apr 15 02:06:23 2024 00:31:37.793 read: IOPS=465, BW=1862KiB/s (1906kB/s)(18.2MiB/10013msec) 00:31:37.793 slat (usec): min=7, max=159, avg=41.86, stdev=30.96 00:31:37.793 clat (usec): min=8759, max=78907, avg=34107.29, stdev=7018.99 00:31:37.793 lat (usec): min=8782, max=78977, avg=34149.14, stdev=7020.46 00:31:37.793 clat percentiles (usec): 00:31:37.793 | 1.00th=[16909], 5.00th=[27919], 10.00th=[28967], 20.00th=[30016], 00:31:37.793 | 30.00th=[30540], 40.00th=[31065], 50.00th=[31589], 60.00th=[32375], 00:31:37.793 | 70.00th=[35914], 80.00th=[39584], 90.00th=[43254], 95.00th=[46924], 00:31:37.793 | 99.00th=[62653], 99.50th=[63177], 99.90th=[70779], 99.95th=[70779], 00:31:37.793 | 99.99th=[79168] 00:31:37.793 bw ( KiB/s): min= 1432, max= 2048, per=4.02%, avg=1859.50, stdev=163.49, samples=20 00:31:37.793 iops : min= 358, max= 512, avg=464.80, stdev=40.80, samples=20 00:31:37.793 lat (msec) : 10=0.09%, 20=1.93%, 50=95.00%, 100=2.98% 00:31:37.793 cpu : usr=98.30%, sys=1.29%, ctx=16, majf=0, minf=49 00:31:37.793 IO depths : 1=1.1%, 2=3.0%, 4=14.8%, 8=67.9%, 16=13.2%, 32=0.0%, >=64=0.0% 00:31:37.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.793 complete : 0=0.0%, 4=92.4%, 8=3.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.793 issued rwts: total=4660,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.793 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.793 filename2: (groupid=0, jobs=1): err= 0: pid=2302617: Mon Apr 15 02:06:23 2024 00:31:37.793 read: IOPS=483, BW=1932KiB/s (1979kB/s)(18.9MiB/10010msec) 00:31:37.793 slat (usec): min=5, max=111, avg=30.64, 
stdev=15.24 00:31:37.793 clat (usec): min=9427, max=77800, avg=32896.55, stdev=5640.23 00:31:37.793 lat (usec): min=9451, max=77830, avg=32927.20, stdev=5639.90 00:31:37.793 clat percentiles (usec): 00:31:37.793 | 1.00th=[17957], 5.00th=[27919], 10.00th=[29230], 20.00th=[30016], 00:31:37.793 | 30.00th=[30540], 40.00th=[30802], 50.00th=[31065], 60.00th=[31589], 00:31:37.793 | 70.00th=[32637], 80.00th=[38011], 90.00th=[40109], 95.00th=[41681], 00:31:37.793 | 99.00th=[52691], 99.50th=[58983], 99.90th=[66323], 99.95th=[78119], 00:31:37.793 | 99.99th=[78119] 00:31:37.793 bw ( KiB/s): min= 1536, max= 2160, per=4.16%, avg=1924.79, stdev=173.12, samples=19 00:31:37.793 iops : min= 384, max= 540, avg=481.16, stdev=43.25, samples=19 00:31:37.793 lat (msec) : 10=0.02%, 20=1.30%, 50=97.23%, 100=1.45% 00:31:37.793 cpu : usr=98.23%, sys=1.14%, ctx=67, majf=0, minf=54 00:31:37.793 IO depths : 1=2.1%, 2=5.3%, 4=17.8%, 8=63.0%, 16=11.7%, 32=0.0%, >=64=0.0% 00:31:37.793 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.794 complete : 0=0.0%, 4=92.9%, 8=2.5%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:37.794 issued rwts: total=4836,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:37.794 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:37.794 00:31:37.794 Run status group 0 (all jobs): 00:31:37.794 READ: bw=45.2MiB/s (47.4MB/s), 1850KiB/s-1984KiB/s (1894kB/s-2031kB/s), io=453MiB (475MB), run=10005-10035msec 00:31:38.053 02:06:23 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:38.053 02:06:23 -- target/dif.sh@43 -- # local sub 00:31:38.053 02:06:23 -- target/dif.sh@45 -- # for sub in "$@" 00:31:38.053 02:06:23 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:38.053 02:06:23 -- target/dif.sh@36 -- # local sub_id=0 00:31:38.053 02:06:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:38.053 02:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.053 02:06:23 -- common/autotest_common.sh@10 -- # set +x 00:31:38.053 02:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.053 02:06:23 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:38.053 02:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.053 02:06:23 -- common/autotest_common.sh@10 -- # set +x 00:31:38.053 02:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.053 02:06:23 -- target/dif.sh@45 -- # for sub in "$@" 00:31:38.053 02:06:23 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:38.053 02:06:23 -- target/dif.sh@36 -- # local sub_id=1 00:31:38.053 02:06:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:38.053 02:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.053 02:06:23 -- common/autotest_common.sh@10 -- # set +x 00:31:38.053 02:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.053 02:06:23 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:38.053 02:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.053 02:06:23 -- common/autotest_common.sh@10 -- # set +x 00:31:38.053 02:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.053 02:06:23 -- target/dif.sh@45 -- # for sub in "$@" 00:31:38.053 02:06:23 -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:38.053 02:06:23 -- target/dif.sh@36 -- # local sub_id=2 00:31:38.053 02:06:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:38.053 02:06:23 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:31:38.053 02:06:23 -- common/autotest_common.sh@10 -- # set +x 00:31:38.053 02:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.053 02:06:23 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:38.053 02:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.053 02:06:23 -- common/autotest_common.sh@10 -- # set +x 00:31:38.053 02:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.053 02:06:23 -- target/dif.sh@115 -- # NULL_DIF=1 00:31:38.053 02:06:23 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:38.053 02:06:23 -- target/dif.sh@115 -- # numjobs=2 00:31:38.053 02:06:23 -- target/dif.sh@115 -- # iodepth=8 00:31:38.053 02:06:23 -- target/dif.sh@115 -- # runtime=5 00:31:38.053 02:06:23 -- target/dif.sh@115 -- # files=1 00:31:38.053 02:06:23 -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:38.053 02:06:23 -- target/dif.sh@28 -- # local sub 00:31:38.053 02:06:23 -- target/dif.sh@30 -- # for sub in "$@" 00:31:38.312 02:06:23 -- target/dif.sh@31 -- # create_subsystem 0 00:31:38.312 02:06:23 -- target/dif.sh@18 -- # local sub_id=0 00:31:38.312 02:06:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:38.312 02:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.312 02:06:23 -- common/autotest_common.sh@10 -- # set +x 00:31:38.312 bdev_null0 00:31:38.312 02:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.312 02:06:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:38.312 02:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.312 02:06:23 -- common/autotest_common.sh@10 -- # set +x 00:31:38.312 02:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.312 02:06:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:38.312 02:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.312 02:06:23 -- common/autotest_common.sh@10 -- # set +x 00:31:38.312 02:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.312 02:06:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:38.312 02:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.312 02:06:23 -- common/autotest_common.sh@10 -- # set +x 00:31:38.312 [2024-04-15 02:06:23.727204] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:38.312 02:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.312 02:06:23 -- target/dif.sh@30 -- # for sub in "$@" 00:31:38.312 02:06:23 -- target/dif.sh@31 -- # create_subsystem 1 00:31:38.312 02:06:23 -- target/dif.sh@18 -- # local sub_id=1 00:31:38.312 02:06:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:38.312 02:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.312 02:06:23 -- common/autotest_common.sh@10 -- # set +x 00:31:38.312 bdev_null1 00:31:38.312 02:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.312 02:06:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:38.312 02:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.312 02:06:23 -- common/autotest_common.sh@10 -- # set +x 00:31:38.312 02:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 
0 ]] 00:31:38.312 02:06:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:38.312 02:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.312 02:06:23 -- common/autotest_common.sh@10 -- # set +x 00:31:38.312 02:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.312 02:06:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:38.312 02:06:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:38.312 02:06:23 -- common/autotest_common.sh@10 -- # set +x 00:31:38.312 02:06:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:38.312 02:06:23 -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:38.312 02:06:23 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:38.312 02:06:23 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:38.312 02:06:23 -- nvmf/common.sh@520 -- # config=() 00:31:38.312 02:06:23 -- nvmf/common.sh@520 -- # local subsystem config 00:31:38.312 02:06:23 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:38.312 02:06:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:38.312 02:06:23 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:38.312 02:06:23 -- target/dif.sh@82 -- # gen_fio_conf 00:31:38.312 02:06:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:38.312 { 00:31:38.312 "params": { 00:31:38.312 "name": "Nvme$subsystem", 00:31:38.312 "trtype": "$TEST_TRANSPORT", 00:31:38.312 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:38.312 "adrfam": "ipv4", 00:31:38.312 "trsvcid": "$NVMF_PORT", 00:31:38.312 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:38.312 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:38.312 "hdgst": ${hdgst:-false}, 00:31:38.312 "ddgst": ${ddgst:-false} 00:31:38.312 }, 00:31:38.312 "method": "bdev_nvme_attach_controller" 00:31:38.312 } 00:31:38.312 EOF 00:31:38.312 )") 00:31:38.313 02:06:23 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:38.313 02:06:23 -- target/dif.sh@54 -- # local file 00:31:38.313 02:06:23 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:38.313 02:06:23 -- target/dif.sh@56 -- # cat 00:31:38.313 02:06:23 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:38.313 02:06:23 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:38.313 02:06:23 -- common/autotest_common.sh@1320 -- # shift 00:31:38.313 02:06:23 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:38.313 02:06:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:38.313 02:06:23 -- nvmf/common.sh@542 -- # cat 00:31:38.313 02:06:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:38.313 02:06:23 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:38.313 02:06:23 -- target/dif.sh@72 -- # (( file <= files )) 00:31:38.313 02:06:23 -- target/dif.sh@73 -- # cat 00:31:38.313 02:06:23 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:38.313 02:06:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:38.313 02:06:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:38.313 02:06:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:38.313 { 
00:31:38.313 "params": { 00:31:38.313 "name": "Nvme$subsystem", 00:31:38.313 "trtype": "$TEST_TRANSPORT", 00:31:38.313 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:38.313 "adrfam": "ipv4", 00:31:38.313 "trsvcid": "$NVMF_PORT", 00:31:38.313 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:38.313 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:38.313 "hdgst": ${hdgst:-false}, 00:31:38.313 "ddgst": ${ddgst:-false} 00:31:38.313 }, 00:31:38.313 "method": "bdev_nvme_attach_controller" 00:31:38.313 } 00:31:38.313 EOF 00:31:38.313 )") 00:31:38.313 02:06:23 -- target/dif.sh@72 -- # (( file++ )) 00:31:38.313 02:06:23 -- target/dif.sh@72 -- # (( file <= files )) 00:31:38.313 02:06:23 -- nvmf/common.sh@542 -- # cat 00:31:38.313 02:06:23 -- nvmf/common.sh@544 -- # jq . 00:31:38.313 02:06:23 -- nvmf/common.sh@545 -- # IFS=, 00:31:38.313 02:06:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:38.313 "params": { 00:31:38.313 "name": "Nvme0", 00:31:38.313 "trtype": "tcp", 00:31:38.313 "traddr": "10.0.0.2", 00:31:38.313 "adrfam": "ipv4", 00:31:38.313 "trsvcid": "4420", 00:31:38.313 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:38.313 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:38.313 "hdgst": false, 00:31:38.313 "ddgst": false 00:31:38.313 }, 00:31:38.313 "method": "bdev_nvme_attach_controller" 00:31:38.313 },{ 00:31:38.313 "params": { 00:31:38.313 "name": "Nvme1", 00:31:38.313 "trtype": "tcp", 00:31:38.313 "traddr": "10.0.0.2", 00:31:38.313 "adrfam": "ipv4", 00:31:38.313 "trsvcid": "4420", 00:31:38.313 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:38.313 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:38.313 "hdgst": false, 00:31:38.313 "ddgst": false 00:31:38.313 }, 00:31:38.313 "method": "bdev_nvme_attach_controller" 00:31:38.313 }' 00:31:38.313 02:06:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:38.313 02:06:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:38.313 02:06:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:38.313 02:06:23 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:38.313 02:06:23 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:38.313 02:06:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:38.313 02:06:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:38.313 02:06:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:38.313 02:06:23 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:38.313 02:06:23 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:38.571 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:38.571 ... 00:31:38.571 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:38.571 ... 00:31:38.571 fio-3.35 00:31:38.571 Starting 4 threads 00:31:38.571 EAL: No free 2048 kB hugepages reported on node 1 00:31:39.137 [2024-04-15 02:06:24.755164] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
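The rpc.c errors printed on either side of this point come from the fio plugin process itself: it initializes the SPDK application framework and tries to bring up its own RPC server on the default /var/tmp/spdk.sock, which the target application already owns. Judging by the run continuing and completing below, the message is benign in this configuration. The invocation pattern the trace uses is the standard one for the bdev fio plugin: preload the plugin and hand fio the generated JSON target config (here via /dev/fd/62) plus the generated job file (via /dev/fd/61). A standalone equivalent with ordinary files (config.json and job.fio are placeholders, not paths from this run):

# Sketch of a manual fio bdev-plugin run; file names are illustrative.
LD_PRELOAD=./spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=config.json job.fio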
00:31:39.137 [2024-04-15 02:06:24.755240] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:44.408 00:31:44.408 filename0: (groupid=0, jobs=1): err= 0: pid=2303909: Mon Apr 15 02:06:29 2024 00:31:44.408 read: IOPS=1631, BW=12.7MiB/s (13.4MB/s)(63.8MiB/5002msec) 00:31:44.408 slat (nsec): min=3939, max=57025, avg=14171.51, stdev=7158.27 00:31:44.408 clat (usec): min=2619, max=10760, avg=4858.57, stdev=889.29 00:31:44.408 lat (usec): min=2641, max=10779, avg=4872.74, stdev=889.32 00:31:44.408 clat percentiles (usec): 00:31:44.408 | 1.00th=[ 3523], 5.00th=[ 3916], 10.00th=[ 4113], 20.00th=[ 4293], 00:31:44.408 | 30.00th=[ 4424], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:31:44.408 | 70.00th=[ 4817], 80.00th=[ 5211], 90.00th=[ 6325], 95.00th=[ 6915], 00:31:44.408 | 99.00th=[ 7767], 99.50th=[ 8029], 99.90th=[ 9503], 99.95th=[10028], 00:31:44.408 | 99.99th=[10814] 00:31:44.408 bw ( KiB/s): min=12624, max=13664, per=24.07%, avg=13054.40, stdev=370.77, samples=10 00:31:44.408 iops : min= 1578, max= 1708, avg=1631.80, stdev=46.35, samples=10 00:31:44.408 lat (msec) : 4=6.80%, 10=93.14%, 20=0.06% 00:31:44.408 cpu : usr=94.92%, sys=4.60%, ctx=9, majf=0, minf=11 00:31:44.408 IO depths : 1=0.1%, 2=0.3%, 4=73.1%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:44.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.408 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.408 issued rwts: total=8162,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.408 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:44.408 filename0: (groupid=0, jobs=1): err= 0: pid=2303910: Mon Apr 15 02:06:29 2024 00:31:44.408 read: IOPS=1714, BW=13.4MiB/s (14.0MB/s)(67.0MiB/5002msec) 00:31:44.408 slat (nsec): min=5085, max=63460, avg=12262.86, stdev=5891.80 00:31:44.408 clat (usec): min=1975, max=48216, avg=4628.38, stdev=1498.21 00:31:44.408 lat (usec): min=1982, max=48231, avg=4640.64, stdev=1497.90 00:31:44.408 clat percentiles (usec): 00:31:44.408 | 1.00th=[ 3163], 5.00th=[ 3556], 10.00th=[ 3884], 20.00th=[ 4178], 00:31:44.408 | 30.00th=[ 4359], 40.00th=[ 4424], 50.00th=[ 4555], 60.00th=[ 4621], 00:31:44.408 | 70.00th=[ 4752], 80.00th=[ 4817], 90.00th=[ 5342], 95.00th=[ 6063], 00:31:44.408 | 99.00th=[ 6915], 99.50th=[ 7177], 99.90th=[ 7635], 99.95th=[47973], 00:31:44.408 | 99.99th=[47973] 00:31:44.408 bw ( KiB/s): min=12857, max=14096, per=25.28%, avg=13711.30, stdev=447.95, samples=10 00:31:44.408 iops : min= 1607, max= 1762, avg=1713.90, stdev=56.02, samples=10 00:31:44.408 lat (msec) : 2=0.05%, 4=13.72%, 10=86.14%, 50=0.09% 00:31:44.408 cpu : usr=94.02%, sys=5.46%, ctx=11, majf=0, minf=9 00:31:44.408 IO depths : 1=0.3%, 2=2.1%, 4=70.5%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:44.408 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.408 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.408 issued rwts: total=8576,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.408 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:44.408 filename1: (groupid=0, jobs=1): err= 0: pid=2303911: Mon Apr 15 02:06:29 2024 00:31:44.408 read: IOPS=1743, BW=13.6MiB/s (14.3MB/s)(68.2MiB/5003msec) 00:31:44.408 slat (nsec): min=5004, max=72949, avg=12639.34, stdev=6274.55 00:31:44.408 clat (usec): min=2495, max=8332, avg=4549.26, stdev=639.58 00:31:44.408 lat (usec): min=2508, max=8344, avg=4561.90, stdev=639.47 00:31:44.408 clat percentiles (usec): 00:31:44.408 | 
1.00th=[ 3064], 5.00th=[ 3556], 10.00th=[ 3851], 20.00th=[ 4146], 00:31:44.408 | 30.00th=[ 4293], 40.00th=[ 4424], 50.00th=[ 4555], 60.00th=[ 4621], 00:31:44.408 | 70.00th=[ 4686], 80.00th=[ 4817], 90.00th=[ 5276], 95.00th=[ 5735], 00:31:44.408 | 99.00th=[ 6783], 99.50th=[ 6915], 99.90th=[ 7373], 99.95th=[ 7963], 00:31:44.408 | 99.99th=[ 8356] 00:31:44.409 bw ( KiB/s): min=13120, max=14832, per=25.79%, avg=13984.00, stdev=479.87, samples=9 00:31:44.409 iops : min= 1640, max= 1854, avg=1748.00, stdev=59.98, samples=9 00:31:44.409 lat (msec) : 4=14.28%, 10=85.72% 00:31:44.409 cpu : usr=94.26%, sys=5.24%, ctx=6, majf=0, minf=0 00:31:44.409 IO depths : 1=0.1%, 2=1.4%, 4=69.9%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:44.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.409 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.409 issued rwts: total=8725,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.409 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:44.409 filename1: (groupid=0, jobs=1): err= 0: pid=2303912: Mon Apr 15 02:06:29 2024 00:31:44.409 read: IOPS=1688, BW=13.2MiB/s (13.8MB/s)(66.0MiB/5003msec) 00:31:44.409 slat (nsec): min=5160, max=72955, avg=12369.61, stdev=6347.02 00:31:44.409 clat (usec): min=2501, max=8759, avg=4700.04, stdev=561.16 00:31:44.409 lat (usec): min=2512, max=8773, avg=4712.41, stdev=561.19 00:31:44.409 clat percentiles (usec): 00:31:44.409 | 1.00th=[ 3425], 5.00th=[ 3916], 10.00th=[ 4113], 20.00th=[ 4359], 00:31:44.409 | 30.00th=[ 4490], 40.00th=[ 4555], 50.00th=[ 4621], 60.00th=[ 4686], 00:31:44.409 | 70.00th=[ 4817], 80.00th=[ 5014], 90.00th=[ 5473], 95.00th=[ 5800], 00:31:44.409 | 99.00th=[ 6587], 99.50th=[ 6783], 99.90th=[ 7373], 99.95th=[ 7373], 00:31:44.409 | 99.99th=[ 8717] 00:31:44.409 bw ( KiB/s): min=12928, max=13915, per=24.91%, avg=13509.90, stdev=302.19, samples=10 00:31:44.409 iops : min= 1616, max= 1739, avg=1688.70, stdev=37.72, samples=10 00:31:44.409 lat (msec) : 4=6.62%, 10=93.38% 00:31:44.409 cpu : usr=94.16%, sys=5.32%, ctx=7, majf=0, minf=9 00:31:44.409 IO depths : 1=0.1%, 2=1.4%, 4=69.6%, 8=28.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:44.409 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.409 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.409 issued rwts: total=8450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.409 latency : target=0, window=0, percentile=100.00%, depth=8 00:31:44.409 00:31:44.409 Run status group 0 (all jobs): 00:31:44.409 READ: bw=53.0MiB/s (55.5MB/s), 12.7MiB/s-13.6MiB/s (13.4MB/s-14.3MB/s), io=265MiB (278MB), run=5002-5003msec 00:31:44.666 02:06:30 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:31:44.666 02:06:30 -- target/dif.sh@43 -- # local sub 00:31:44.666 02:06:30 -- target/dif.sh@45 -- # for sub in "$@" 00:31:44.666 02:06:30 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:44.666 02:06:30 -- target/dif.sh@36 -- # local sub_id=0 00:31:44.666 02:06:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:44.666 02:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:44.666 02:06:30 -- common/autotest_common.sh@10 -- # set +x 00:31:44.667 02:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:44.667 02:06:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:44.667 02:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:44.667 02:06:30 -- common/autotest_common.sh@10 -- # set +x 
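For reference, the destroy_subsystems teardown traced here is a two-step pattern per target: remove the NVMe-oF subsystem first, so no initiator still holds the namespace, then delete the null bdev behind it. rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py; a standalone sketch, assuming the default RPC socket at /var/tmp/spdk.sock:

    # Subsystem before bdev: the namespace must be unexported before
    # its backing bdev can go away cleanly.
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    ./scripts/rpc.py bdev_null_delete bdev_null0
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    ./scripts/rpc.py bdev_null_delete bdev_null1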
00:31:44.667 02:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:44.667 02:06:30 -- target/dif.sh@45 -- # for sub in "$@" 00:31:44.667 02:06:30 -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:44.667 02:06:30 -- target/dif.sh@36 -- # local sub_id=1 00:31:44.667 02:06:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:44.667 02:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:44.667 02:06:30 -- common/autotest_common.sh@10 -- # set +x 00:31:44.667 02:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:44.667 02:06:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:44.667 02:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:44.667 02:06:30 -- common/autotest_common.sh@10 -- # set +x 00:31:44.667 02:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:44.667 00:31:44.667 real 0m24.307s 00:31:44.667 user 4m29.443s 00:31:44.667 sys 0m7.810s 00:31:44.667 02:06:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:44.667 02:06:30 -- common/autotest_common.sh@10 -- # set +x 00:31:44.667 ************************************ 00:31:44.667 END TEST fio_dif_rand_params 00:31:44.667 ************************************ 00:31:44.667 02:06:30 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:31:44.667 02:06:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:31:44.667 02:06:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:31:44.667 02:06:30 -- common/autotest_common.sh@10 -- # set +x 00:31:44.667 ************************************ 00:31:44.667 START TEST fio_dif_digest 00:31:44.667 ************************************ 00:31:44.667 02:06:30 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:31:44.667 02:06:30 -- target/dif.sh@123 -- # local NULL_DIF 00:31:44.667 02:06:30 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:31:44.667 02:06:30 -- target/dif.sh@125 -- # local hdgst ddgst 00:31:44.667 02:06:30 -- target/dif.sh@127 -- # NULL_DIF=3 00:31:44.667 02:06:30 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:31:44.667 02:06:30 -- target/dif.sh@127 -- # numjobs=3 00:31:44.667 02:06:30 -- target/dif.sh@127 -- # iodepth=3 00:31:44.667 02:06:30 -- target/dif.sh@127 -- # runtime=10 00:31:44.667 02:06:30 -- target/dif.sh@128 -- # hdgst=true 00:31:44.667 02:06:30 -- target/dif.sh@128 -- # ddgst=true 00:31:44.667 02:06:30 -- target/dif.sh@130 -- # create_subsystems 0 00:31:44.667 02:06:30 -- target/dif.sh@28 -- # local sub 00:31:44.667 02:06:30 -- target/dif.sh@30 -- # for sub in "$@" 00:31:44.667 02:06:30 -- target/dif.sh@31 -- # create_subsystem 0 00:31:44.667 02:06:30 -- target/dif.sh@18 -- # local sub_id=0 00:31:44.667 02:06:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:44.667 02:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:44.667 02:06:30 -- common/autotest_common.sh@10 -- # set +x 00:31:44.667 bdev_null0 00:31:44.667 02:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:44.667 02:06:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:44.667 02:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:44.667 02:06:30 -- common/autotest_common.sh@10 -- # set +x 00:31:44.667 02:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:44.667 02:06:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:31:44.667 02:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:44.667 02:06:30 -- common/autotest_common.sh@10 -- # set +x 00:31:44.667 02:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:44.667 02:06:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:44.667 02:06:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:44.667 02:06:30 -- common/autotest_common.sh@10 -- # set +x 00:31:44.667 [2024-04-15 02:06:30.264488] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:44.667 02:06:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:44.667 02:06:30 -- target/dif.sh@131 -- # fio /dev/fd/62 00:31:44.667 02:06:30 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:31:44.667 02:06:30 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:44.667 02:06:30 -- nvmf/common.sh@520 -- # config=() 00:31:44.667 02:06:30 -- nvmf/common.sh@520 -- # local subsystem config 00:31:44.667 02:06:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:31:44.667 02:06:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:31:44.667 { 00:31:44.667 "params": { 00:31:44.667 "name": "Nvme$subsystem", 00:31:44.667 "trtype": "$TEST_TRANSPORT", 00:31:44.667 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.667 "adrfam": "ipv4", 00:31:44.667 "trsvcid": "$NVMF_PORT", 00:31:44.667 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.667 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.667 "hdgst": ${hdgst:-false}, 00:31:44.667 "ddgst": ${ddgst:-false} 00:31:44.667 }, 00:31:44.667 "method": "bdev_nvme_attach_controller" 00:31:44.667 } 00:31:44.667 EOF 00:31:44.667 )") 00:31:44.667 02:06:30 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:44.667 02:06:30 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:44.667 02:06:30 -- target/dif.sh@82 -- # gen_fio_conf 00:31:44.667 02:06:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:31:44.667 02:06:30 -- target/dif.sh@54 -- # local file 00:31:44.667 02:06:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:44.667 02:06:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:31:44.667 02:06:30 -- target/dif.sh@56 -- # cat 00:31:44.667 02:06:30 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:44.667 02:06:30 -- common/autotest_common.sh@1320 -- # shift 00:31:44.667 02:06:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:31:44.667 02:06:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:44.667 02:06:30 -- nvmf/common.sh@542 -- # cat 00:31:44.667 02:06:30 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:44.667 02:06:30 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:44.667 02:06:30 -- common/autotest_common.sh@1324 -- # grep libasan 00:31:44.667 02:06:30 -- target/dif.sh@72 -- # (( file <= files )) 00:31:44.667 02:06:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:44.667 02:06:30 -- nvmf/common.sh@544 -- # jq . 
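The create_subsystems sequence just traced builds the fio_dif_digest target: a 64 MiB null bdev with 512-byte blocks carrying 16 bytes of metadata and DIF type 3, exported as a namespace of cnode0 over NVMe/TCP on 10.0.0.2:4420. Collapsed into direct rpc.py calls (a sketch; rpc_cmd resolves to scripts/rpc.py here):

    # 64 MiB null bdev, 512 B blocks + 16 B metadata, end-to-end DIF type 3
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420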
00:31:44.667 02:06:30 -- nvmf/common.sh@545 -- # IFS=, 00:31:44.667 02:06:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:31:44.667 "params": { 00:31:44.667 "name": "Nvme0", 00:31:44.667 "trtype": "tcp", 00:31:44.667 "traddr": "10.0.0.2", 00:31:44.667 "adrfam": "ipv4", 00:31:44.667 "trsvcid": "4420", 00:31:44.667 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:44.667 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:44.667 "hdgst": true, 00:31:44.667 "ddgst": true 00:31:44.667 }, 00:31:44.667 "method": "bdev_nvme_attach_controller" 00:31:44.667 }' 00:31:44.667 02:06:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:44.667 02:06:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:44.667 02:06:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:31:44.667 02:06:30 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:44.667 02:06:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:31:44.667 02:06:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:31:44.667 02:06:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:31:44.667 02:06:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:31:44.667 02:06:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:44.667 02:06:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:44.925 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:44.925 ... 00:31:44.925 fio-3.35 00:31:44.925 Starting 3 threads 00:31:44.925 EAL: No free 2048 kB hugepages reported on node 1 00:31:45.492 [2024-04-15 02:06:30.999419] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
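The JSON printed above is handed to fio through SPDK's bdev fio plugin: fio runs with the plugin LD_PRELOADed, --ioengine=spdk_bdev, and --spdk_json_conf pointing at that config, so each job drives the Nvme0n1 bdev that bdev_nvme_attach_controller creates. With "hdgst" and "ddgst" set to true, the NVMe/TCP connection negotiates header and data digests, which is what this test verifies. A file-based sketch of the same invocation; the paths, job options, and the Nvme0n1 bdev name are reconstructed from the trace, not copied from a job file:

    # bdev.json = the printf output above (bdev_nvme_attach_controller params)
    LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio \
        --name=filename0 --ioengine=spdk_bdev --spdk_json_conf=bdev.json \
        --thread=1 --filename=Nvme0n1 \
        --rw=randread --bs=128k --iodepth=3 --numjobs=3 \
        --time_based=1 --runtime=10
    # --thread=1 is required by the spdk_bdev engine; the "Starting 3 threads"
    # line above reflects it.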
00:31:45.492 [2024-04-15 02:06:30.999499] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:31:57.703 00:31:57.703 filename0: (groupid=0, jobs=1): err= 0: pid=2304810: Mon Apr 15 02:06:41 2024 00:31:57.703 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(246MiB/10016msec) 00:31:57.703 slat (usec): min=4, max=103, avg=15.60, stdev= 4.92 00:31:57.703 clat (usec): min=7419, max=96140, avg=15224.20, stdev=11165.63 00:31:57.703 lat (usec): min=7432, max=96161, avg=15239.81, stdev=11165.91 00:31:57.703 clat percentiles (usec): 00:31:57.703 | 1.00th=[ 7767], 5.00th=[ 8586], 10.00th=[ 9241], 20.00th=[10290], 00:31:57.704 | 30.00th=[11207], 40.00th=[12256], 50.00th=[12911], 60.00th=[13435], 00:31:57.704 | 70.00th=[13829], 80.00th=[14353], 90.00th=[15270], 95.00th=[52167], 00:31:57.704 | 99.00th=[55313], 99.50th=[55837], 99.90th=[92799], 99.95th=[95945], 00:31:57.704 | 99.99th=[95945] 00:31:57.704 bw ( KiB/s): min=19968, max=30976, per=37.94%, avg=25192.75, stdev=2866.67, samples=20 00:31:57.704 iops : min= 156, max= 242, avg=196.80, stdev=22.41, samples=20 00:31:57.704 lat (msec) : 10=17.71%, 20=75.19%, 50=0.05%, 100=7.05% 00:31:57.704 cpu : usr=93.41%, sys=6.09%, ctx=25, majf=0, minf=195 00:31:57.704 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.704 issued rwts: total=1971,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.704 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:57.704 filename0: (groupid=0, jobs=1): err= 0: pid=2304811: Mon Apr 15 02:06:41 2024 00:31:57.704 read: IOPS=155, BW=19.5MiB/s (20.4MB/s)(196MiB/10031msec) 00:31:57.704 slat (nsec): min=4458, max=53209, avg=20000.23, stdev=5078.06 00:31:57.704 clat (msec): min=7, max=100, avg=19.21, stdev=11.87 00:31:57.704 lat (msec): min=7, max=100, avg=19.23, stdev=11.87 00:31:57.704 clat percentiles (msec): 00:31:57.704 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 11], 20.00th=[ 13], 00:31:57.704 | 30.00th=[ 16], 40.00th=[ 17], 50.00th=[ 18], 60.00th=[ 18], 00:31:57.704 | 70.00th=[ 19], 80.00th=[ 20], 90.00th=[ 22], 95.00th=[ 56], 00:31:57.704 | 99.00th=[ 60], 99.50th=[ 61], 99.90th=[ 66], 99.95th=[ 101], 00:31:57.704 | 99.99th=[ 101] 00:31:57.704 bw ( KiB/s): min=15616, max=26368, per=30.09%, avg=19980.80, stdev=2999.83, samples=20 00:31:57.704 iops : min= 122, max= 206, avg=156.10, stdev=23.44, samples=20 00:31:57.704 lat (msec) : 10=8.44%, 20=75.96%, 50=7.29%, 100=8.25%, 250=0.06% 00:31:57.704 cpu : usr=94.71%, sys=4.82%, ctx=14, majf=0, minf=102 00:31:57.704 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.704 issued rwts: total=1564,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.704 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:57.704 filename0: (groupid=0, jobs=1): err= 0: pid=2304812: Mon Apr 15 02:06:41 2024 00:31:57.704 read: IOPS=166, BW=20.9MiB/s (21.9MB/s)(210MiB/10046msec) 00:31:57.704 slat (nsec): min=4177, max=42226, avg=15825.10, stdev=4984.67 00:31:57.704 clat (usec): min=7605, max=95289, avg=17925.76, stdev=13527.10 00:31:57.704 lat (usec): min=7617, max=95301, avg=17941.59, stdev=13526.98 00:31:57.704 clat percentiles (usec): 00:31:57.704 | 1.00th=[ 7832], 
5.00th=[10028], 10.00th=[10683], 20.00th=[11863], 00:31:57.704 | 30.00th=[12911], 40.00th=[13435], 50.00th=[13960], 60.00th=[14353], 00:31:57.704 | 70.00th=[14746], 80.00th=[15270], 90.00th=[52167], 95.00th=[54264], 00:31:57.704 | 99.00th=[56361], 99.50th=[61604], 99.90th=[93848], 99.95th=[94897], 00:31:57.704 | 99.99th=[94897] 00:31:57.704 bw ( KiB/s): min=15104, max=26112, per=32.27%, avg=21429.45, stdev=3164.96, samples=20 00:31:57.704 iops : min= 118, max= 204, avg=167.40, stdev=24.72, samples=20 00:31:57.704 lat (msec) : 10=4.71%, 20=84.26%, 50=0.18%, 100=10.85% 00:31:57.704 cpu : usr=94.26%, sys=5.29%, ctx=20, majf=0, minf=163 00:31:57.704 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:57.704 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.704 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:57.704 issued rwts: total=1677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:57.704 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:57.704 00:31:57.704 Run status group 0 (all jobs): 00:31:57.704 READ: bw=64.9MiB/s (68.0MB/s), 19.5MiB/s-24.6MiB/s (20.4MB/s-25.8MB/s), io=652MiB (683MB), run=10016-10046msec 00:31:57.704 02:06:41 -- target/dif.sh@132 -- # destroy_subsystems 0 00:31:57.704 02:06:41 -- target/dif.sh@43 -- # local sub 00:31:57.704 02:06:41 -- target/dif.sh@45 -- # for sub in "$@" 00:31:57.704 02:06:41 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:57.704 02:06:41 -- target/dif.sh@36 -- # local sub_id=0 00:31:57.704 02:06:41 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:57.704 02:06:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:57.704 02:06:41 -- common/autotest_common.sh@10 -- # set +x 00:31:57.704 02:06:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:57.704 02:06:41 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:57.704 02:06:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:31:57.704 02:06:41 -- common/autotest_common.sh@10 -- # set +x 00:31:57.704 02:06:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:31:57.704 00:31:57.704 real 0m11.226s 00:31:57.704 user 0m29.570s 00:31:57.704 sys 0m1.880s 00:31:57.704 02:06:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:57.704 02:06:41 -- common/autotest_common.sh@10 -- # set +x 00:31:57.704 ************************************ 00:31:57.704 END TEST fio_dif_digest 00:31:57.704 ************************************ 00:31:57.704 02:06:41 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:31:57.704 02:06:41 -- target/dif.sh@147 -- # nvmftestfini 00:31:57.704 02:06:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:31:57.704 02:06:41 -- nvmf/common.sh@116 -- # sync 00:31:57.704 02:06:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:31:57.704 02:06:41 -- nvmf/common.sh@119 -- # set +e 00:31:57.704 02:06:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:31:57.704 02:06:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:31:57.704 rmmod nvme_tcp 00:31:57.704 rmmod nvme_fabrics 00:31:57.704 rmmod nvme_keyring 00:31:57.704 02:06:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:31:57.704 02:06:41 -- nvmf/common.sh@123 -- # set -e 00:31:57.704 02:06:41 -- nvmf/common.sh@124 -- # return 0 00:31:57.704 02:06:41 -- nvmf/common.sh@477 -- # '[' -n 2297847 ']' 00:31:57.704 02:06:41 -- nvmf/common.sh@478 -- # killprocess 2297847 00:31:57.704 02:06:41 -- common/autotest_common.sh@926 -- # '[' -z 2297847 ']' 
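killprocess, entered here, is the harness's guarded shutdown: confirm the PID is still alive and really is an SPDK reactor before signalling it, then reap it. A condensed bash sketch of the pattern (the real function also special-cases targets launched under sudo):

    pid=2297847
    if kill -0 "$pid" 2>/dev/null; then                  # process still exists?
        [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ] &&
            kill "$pid" && wait "$pid"                   # signal, then reap
        # wait works here only because nvmf_tgt was started from this shell
    fi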
00:31:57.704 02:06:41 -- common/autotest_common.sh@930 -- # kill -0 2297847 00:31:57.704 02:06:41 -- common/autotest_common.sh@931 -- # uname 00:31:57.704 02:06:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:31:57.704 02:06:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2297847 00:31:57.704 02:06:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:31:57.704 02:06:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:31:57.704 02:06:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2297847' 00:31:57.704 killing process with pid 2297847 00:31:57.704 02:06:41 -- common/autotest_common.sh@945 -- # kill 2297847 00:31:57.704 02:06:41 -- common/autotest_common.sh@950 -- # wait 2297847 00:31:57.704 02:06:41 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:31:57.704 02:06:41 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:57.704 Waiting for block devices as requested 00:31:57.704 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:31:57.704 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:57.704 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:57.704 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:57.704 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:57.996 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:57.996 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:57.996 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:57.996 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:57.996 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:58.257 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:58.257 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:58.257 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:58.257 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:58.516 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:58.516 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:58.516 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:58.776 02:06:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:31:58.776 02:06:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:31:58.776 02:06:44 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:58.776 02:06:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:31:58.776 02:06:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:58.776 02:06:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:58.776 02:06:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.687 02:06:46 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:00.687 00:32:00.687 real 1m7.178s 00:32:00.687 user 6m26.758s 00:32:00.687 sys 0m19.065s 00:32:00.687 02:06:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:00.687 02:06:46 -- common/autotest_common.sh@10 -- # set +x 00:32:00.687 ************************************ 00:32:00.687 END TEST nvmf_dif 00:32:00.687 ************************************ 00:32:00.687 02:06:46 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:00.687 02:06:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:00.687 02:06:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:00.687 02:06:46 -- common/autotest_common.sh@10 -- # set +x 00:32:00.687 ************************************ 00:32:00.687 START TEST nvmf_abort_qd_sizes 00:32:00.687 ************************************ 00:32:00.687 
02:06:46 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:00.687 * Looking for test storage... 00:32:00.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:00.687 02:06:46 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:00.687 02:06:46 -- nvmf/common.sh@7 -- # uname -s 00:32:00.687 02:06:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:00.687 02:06:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:00.687 02:06:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:00.687 02:06:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:00.687 02:06:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:00.687 02:06:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:00.687 02:06:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:00.687 02:06:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:00.687 02:06:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:00.687 02:06:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:00.687 02:06:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:00.946 02:06:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:00.946 02:06:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:00.947 02:06:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:00.947 02:06:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:00.947 02:06:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:00.947 02:06:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:00.947 02:06:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:00.947 02:06:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:00.947 02:06:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.947 02:06:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.947 02:06:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.947 02:06:46 -- paths/export.sh@5 -- # export PATH 00:32:00.947 02:06:46 -- paths/export.sh@6 -- # echo 
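The host identity for later discover/connect calls is fixed at this point: nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<UUID>, and the UUID suffix doubles as the host ID. One way to derive the pair (a sketch; the exact parsing inside common.sh may differ):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # nqn.2014-08.org.nvmexpress:uuid:<UUID>
    NVME_HOSTID=${NVME_HOSTNQN##*:}      # keep everything after the last ':'
    # Used later, e.g. by the kernel-target discovery check:
    nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
        -a 10.0.0.1 -t tcp -s 4420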
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:00.947 02:06:46 -- nvmf/common.sh@46 -- # : 0 00:32:00.947 02:06:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:32:00.947 02:06:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:32:00.947 02:06:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:32:00.947 02:06:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:00.947 02:06:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:00.947 02:06:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:32:00.947 02:06:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:32:00.947 02:06:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:32:00.947 02:06:46 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:32:00.947 02:06:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:32:00.947 02:06:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:00.947 02:06:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:32:00.947 02:06:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:32:00.947 02:06:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:32:00.947 02:06:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:00.947 02:06:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:00.947 02:06:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:00.947 02:06:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:32:00.947 02:06:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:32:00.947 02:06:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:32:00.947 02:06:46 -- common/autotest_common.sh@10 -- # set +x 00:32:02.850 02:06:48 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:32:02.850 02:06:48 -- nvmf/common.sh@290 -- # pci_devs=() 00:32:02.850 02:06:48 -- nvmf/common.sh@290 -- # local -a pci_devs 00:32:02.850 02:06:48 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:32:02.850 02:06:48 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:32:02.850 02:06:48 -- nvmf/common.sh@292 -- # pci_drivers=() 00:32:02.850 02:06:48 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:32:02.850 02:06:48 -- nvmf/common.sh@294 -- # net_devs=() 00:32:02.850 02:06:48 -- nvmf/common.sh@294 -- # local -ga net_devs 00:32:02.850 02:06:48 -- nvmf/common.sh@295 -- # e810=() 00:32:02.850 02:06:48 -- nvmf/common.sh@295 -- # local -ga e810 00:32:02.850 02:06:48 -- nvmf/common.sh@296 -- # x722=() 00:32:02.850 02:06:48 -- nvmf/common.sh@296 -- # local -ga x722 00:32:02.850 02:06:48 -- nvmf/common.sh@297 -- # mlx=() 00:32:02.850 02:06:48 -- nvmf/common.sh@297 -- # local -ga mlx 00:32:02.850 02:06:48 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:02.850 02:06:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:02.850 02:06:48 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:02.850 02:06:48 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:02.850 02:06:48 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:02.850 02:06:48 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:02.850 02:06:48 -- nvmf/common.sh@311 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:02.850 02:06:48 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:02.850 02:06:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:02.850 02:06:48 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:02.850 02:06:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:02.850 02:06:48 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:32:02.850 02:06:48 -- nvmf/common.sh@320 -- # [[ tcp == rdma ]] 00:32:02.850 02:06:48 -- nvmf/common.sh@326 -- # [[ e810 == mlx5 ]] 00:32:02.850 02:06:48 -- nvmf/common.sh@328 -- # [[ e810 == e810 ]] 00:32:02.850 02:06:48 -- nvmf/common.sh@329 -- # pci_devs=("${e810[@]}") 00:32:02.850 02:06:48 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:32:02.850 02:06:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:02.850 02:06:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:02.850 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:02.850 02:06:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:02.850 02:06:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:02.850 02:06:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.850 02:06:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.850 02:06:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:02.850 02:06:48 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:32:02.850 02:06:48 -- nvmf/common.sh@340 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:02.850 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:02.851 02:06:48 -- nvmf/common.sh@341 -- # [[ ice == unknown ]] 00:32:02.851 02:06:48 -- nvmf/common.sh@345 -- # [[ ice == unbound ]] 00:32:02.851 02:06:48 -- nvmf/common.sh@349 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:02.851 02:06:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:02.851 02:06:48 -- nvmf/common.sh@351 -- # [[ tcp == rdma ]] 00:32:02.851 02:06:48 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:32:02.851 02:06:48 -- nvmf/common.sh@371 -- # [[ e810 == e810 ]] 00:32:02.851 02:06:48 -- nvmf/common.sh@371 -- # [[ tcp == rdma ]] 00:32:02.851 02:06:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:02.851 02:06:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.851 02:06:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:02.851 02:06:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.851 02:06:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:02.851 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:02.851 02:06:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.851 02:06:48 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:32:02.851 02:06:48 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:02.851 02:06:48 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:32:02.851 02:06:48 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:02.851 02:06:48 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:02.851 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:02.851 02:06:48 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:32:02.851 02:06:48 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:32:02.851 02:06:48 -- nvmf/common.sh@402 -- # is_hw=yes 00:32:02.851 02:06:48 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:32:02.851 02:06:48 -- 
nvmf/common.sh@405 -- # [[ tcp == tcp ]] 00:32:02.851 02:06:48 -- nvmf/common.sh@406 -- # nvmf_tcp_init 00:32:02.851 02:06:48 -- nvmf/common.sh@228 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:02.851 02:06:48 -- nvmf/common.sh@229 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:02.851 02:06:48 -- nvmf/common.sh@230 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:02.851 02:06:48 -- nvmf/common.sh@233 -- # (( 2 > 1 )) 00:32:02.851 02:06:48 -- nvmf/common.sh@235 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:02.851 02:06:48 -- nvmf/common.sh@236 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:02.851 02:06:48 -- nvmf/common.sh@239 -- # NVMF_SECOND_TARGET_IP= 00:32:02.851 02:06:48 -- nvmf/common.sh@241 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:02.851 02:06:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:02.851 02:06:48 -- nvmf/common.sh@243 -- # ip -4 addr flush cvl_0_0 00:32:02.851 02:06:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_1 00:32:02.851 02:06:48 -- nvmf/common.sh@247 -- # ip netns add cvl_0_0_ns_spdk 00:32:02.851 02:06:48 -- nvmf/common.sh@250 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:02.851 02:06:48 -- nvmf/common.sh@253 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:02.851 02:06:48 -- nvmf/common.sh@254 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:02.851 02:06:48 -- nvmf/common.sh@257 -- # ip link set cvl_0_1 up 00:32:02.851 02:06:48 -- nvmf/common.sh@259 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:02.851 02:06:48 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:02.851 02:06:48 -- nvmf/common.sh@263 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:02.851 02:06:48 -- nvmf/common.sh@266 -- # ping -c 1 10.0.0.2 00:32:02.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:02.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:32:02.851 00:32:02.851 --- 10.0.0.2 ping statistics --- 00:32:02.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.851 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:32:02.851 02:06:48 -- nvmf/common.sh@267 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:02.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:02.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:32:02.851 00:32:02.851 --- 10.0.0.1 ping statistics --- 00:32:02.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:02.851 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:32:02.851 02:06:48 -- nvmf/common.sh@269 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:02.851 02:06:48 -- nvmf/common.sh@410 -- # return 0 00:32:02.851 02:06:48 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:32:02.851 02:06:48 -- nvmf/common.sh@439 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:04.229 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:04.229 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:04.229 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:04.229 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:04.229 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:04.229 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:04.229 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:04.229 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:04.229 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:04.229 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:04.229 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:04.229 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:04.229 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:04.229 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:04.229 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:04.229 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:05.168 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:05.168 02:06:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:05.168 02:06:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:32:05.168 02:06:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:32:05.168 02:06:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:05.168 02:06:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:32:05.168 02:06:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:32:05.168 02:06:50 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:32:05.168 02:06:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:32:05.168 02:06:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:05.168 02:06:50 -- common/autotest_common.sh@10 -- # set +x 00:32:05.427 02:06:50 -- nvmf/common.sh@469 -- # nvmfpid=2309702 00:32:05.427 02:06:50 -- nvmf/common.sh@468 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:05.427 02:06:50 -- nvmf/common.sh@470 -- # waitforlisten 2309702 00:32:05.427 02:06:50 -- common/autotest_common.sh@819 -- # '[' -z 2309702 ']' 00:32:05.427 02:06:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:05.427 02:06:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:32:05.427 02:06:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:05.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:05.427 02:06:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:32:05.427 02:06:50 -- common/autotest_common.sh@10 -- # set +x 00:32:05.427 [2024-04-15 02:06:50.861219] Starting SPDK v24.01.1-pre git sha1 3b33f4333 / DPDK 23.11.0 initialization... 
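nvmf_tcp_init, traced above, wires up the loopback topology this run uses: the two cabled E810 ports become initiator (cvl_0_1, 10.0.0.1) and target (cvl_0_0, 10.0.0.2), with the target port isolated in its own network namespace so both ends of the TCP connection can live on one host. nvmfappstart then launches nvmf_tgt inside that namespace; -m 0xf pins four reactors, matching the four "Reactor started" notices that follow. Condensed from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port into the netns
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator stays in root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &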
00:32:05.427 [2024-04-15 02:06:50.861297] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:05.427 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.427 [2024-04-15 02:06:50.929041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:05.427 [2024-04-15 02:06:51.020073] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:32:05.427 [2024-04-15 02:06:51.020207] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:05.427 [2024-04-15 02:06:51.020224] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:05.427 [2024-04-15 02:06:51.020237] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:05.427 [2024-04-15 02:06:51.020291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:05.427 [2024-04-15 02:06:51.020354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:05.427 [2024-04-15 02:06:51.020420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:05.427 [2024-04-15 02:06:51.020422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.365 02:06:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:32:06.365 02:06:51 -- common/autotest_common.sh@852 -- # return 0 00:32:06.365 02:06:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:32:06.365 02:06:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:06.365 02:06:51 -- common/autotest_common.sh@10 -- # set +x 00:32:06.365 02:06:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.365 02:06:51 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:06.365 02:06:51 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:32:06.365 02:06:51 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:32:06.365 02:06:51 -- scripts/common.sh@311 -- # local bdf bdfs 00:32:06.365 02:06:51 -- scripts/common.sh@312 -- # local nvmes 00:32:06.365 02:06:51 -- scripts/common.sh@314 -- # [[ -n 0000:88:00.0 ]] 00:32:06.365 02:06:51 -- scripts/common.sh@315 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:06.365 02:06:51 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:32:06.365 02:06:51 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:32:06.365 02:06:51 -- scripts/common.sh@322 -- # uname -s 00:32:06.365 02:06:51 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:32:06.365 02:06:51 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:32:06.365 02:06:51 -- scripts/common.sh@327 -- # (( 1 )) 00:32:06.365 02:06:51 -- scripts/common.sh@328 -- # printf '%s\n' 0000:88:00.0 00:32:06.365 02:06:51 -- target/abort_qd_sizes.sh@79 -- # (( 1 > 0 )) 00:32:06.365 02:06:51 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:88:00.0 00:32:06.365 02:06:51 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:32:06.365 02:06:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:06.365 02:06:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:06.365 02:06:51 -- common/autotest_common.sh@10 -- # set +x 00:32:06.365 ************************************ 00:32:06.365 START TEST 
spdk_target_abort 00:32:06.365 ************************************ 00:32:06.365 02:06:51 -- common/autotest_common.sh@1104 -- # spdk_target 00:32:06.365 02:06:51 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:06.365 02:06:51 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:06.365 02:06:51 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:32:06.365 02:06:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:06.365 02:06:51 -- common/autotest_common.sh@10 -- # set +x 00:32:09.652 spdk_targetn1 00:32:09.652 02:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:09.652 02:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.652 02:06:54 -- common/autotest_common.sh@10 -- # set +x 00:32:09.652 [2024-04-15 02:06:54.696803] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:09.652 02:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:32:09.652 02:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.652 02:06:54 -- common/autotest_common.sh@10 -- # set +x 00:32:09.652 02:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:32:09.652 02:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.652 02:06:54 -- common/autotest_common.sh@10 -- # set +x 00:32:09.652 02:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:32:09.652 02:06:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:09.652 02:06:54 -- common/autotest_common.sh@10 -- # set +x 00:32:09.652 [2024-04-15 02:06:54.729082] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:09.652 02:06:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid 
subnqn 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:09.652 02:06:54 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:09.652 EAL: No free 2048 kB hugepages reported on node 1 00:32:12.942 Initializing NVMe Controllers 00:32:12.942 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:12.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:12.942 Initialization complete. Launching workers. 00:32:12.942 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 7580, failed: 0 00:32:12.942 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1280, failed to submit 6300 00:32:12.942 success 850, unsuccess 430, failed 0 00:32:12.942 02:06:57 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:12.942 02:06:57 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:12.942 EAL: No free 2048 kB hugepages reported on node 1 00:32:16.229 Initializing NVMe Controllers 00:32:16.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:16.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:16.229 Initialization complete. Launching workers. 00:32:16.229 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8619, failed: 0 00:32:16.229 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1234, failed to submit 7385 00:32:16.229 success 322, unsuccess 912, failed 0 00:32:16.229 02:07:01 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:16.229 02:07:01 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:32:16.229 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.792 Initializing NVMe Controllers 00:32:18.792 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:32:18.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:32:18.792 Initialization complete. Launching workers. 
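The runs above and the results that follow come from rabort, which sweeps SPDK's abort example over queue depths 4, 24 and 64 against the subsystem backed by the local PCIe NVMe device (attached as bdev spdk_targetn1 from 0000:88:00.0). -w rw -M 50 issues a 50/50 mix of 4 KiB reads and writes while the tool races abort commands against the outstanding I/O; deeper queues leave more commands in flight for the aborts to catch. The core loop, as a sketch:

    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target'
    for qd in 4 24 64; do
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done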
00:32:18.792 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 32182, failed: 0 00:32:18.792 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2713, failed to submit 29469 00:32:18.792 success 538, unsuccess 2175, failed 0 00:32:18.792 02:07:04 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:32:18.792 02:07:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:18.792 02:07:04 -- common/autotest_common.sh@10 -- # set +x 00:32:18.792 02:07:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:18.792 02:07:04 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:18.792 02:07:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:32:18.792 02:07:04 -- common/autotest_common.sh@10 -- # set +x 00:32:20.170 02:07:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:32:20.170 02:07:05 -- target/abort_qd_sizes.sh@62 -- # killprocess 2309702 00:32:20.170 02:07:05 -- common/autotest_common.sh@926 -- # '[' -z 2309702 ']' 00:32:20.170 02:07:05 -- common/autotest_common.sh@930 -- # kill -0 2309702 00:32:20.170 02:07:05 -- common/autotest_common.sh@931 -- # uname 00:32:20.170 02:07:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:32:20.170 02:07:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 2309702 00:32:20.170 02:07:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:32:20.170 02:07:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:32:20.170 02:07:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 2309702' 00:32:20.170 killing process with pid 2309702 00:32:20.170 02:07:05 -- common/autotest_common.sh@945 -- # kill 2309702 00:32:20.170 02:07:05 -- common/autotest_common.sh@950 -- # wait 2309702 00:32:20.429 00:32:20.429 real 0m14.185s 00:32:20.429 user 0m55.782s 00:32:20.429 sys 0m2.782s 00:32:20.429 02:07:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:20.429 02:07:06 -- common/autotest_common.sh@10 -- # set +x 00:32:20.429 ************************************ 00:32:20.429 END TEST spdk_target_abort 00:32:20.429 ************************************ 00:32:20.429 02:07:06 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:32:20.429 02:07:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:32:20.429 02:07:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:32:20.429 02:07:06 -- common/autotest_common.sh@10 -- # set +x 00:32:20.429 ************************************ 00:32:20.429 START TEST kernel_target_abort 00:32:20.429 ************************************ 00:32:20.429 02:07:06 -- common/autotest_common.sh@1104 -- # kernel_target 00:32:20.429 02:07:06 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:32:20.429 02:07:06 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:32:20.429 02:07:06 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:32:20.429 02:07:06 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:32:20.429 02:07:06 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:32:20.429 02:07:06 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:20.429 02:07:06 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:20.429 02:07:06 -- nvmf/common.sh@627 -- # local block nvme 00:32:20.429 02:07:06 
-- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:32:20.429 02:07:06 -- nvmf/common.sh@630 -- # modprobe nvmet 00:32:20.688 02:07:06 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:20.688 02:07:06 -- nvmf/common.sh@635 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:21.625 Waiting for block devices as requested 00:32:21.625 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:21.883 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:21.883 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:21.883 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:22.144 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:22.144 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:22.144 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:22.144 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:22.403 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:22.403 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:22.403 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:22.403 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:22.662 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:22.662 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:22.662 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:22.662 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:22.662 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:22.921 02:07:08 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:32:22.921 02:07:08 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:22.921 02:07:08 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:32:22.921 02:07:08 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:32:22.921 02:07:08 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:22.921 No valid GPT data, bailing 00:32:22.921 02:07:08 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:22.921 02:07:08 -- scripts/common.sh@393 -- # pt= 00:32:22.921 02:07:08 -- scripts/common.sh@394 -- # return 1 00:32:22.921 02:07:08 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:32:22.921 02:07:08 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme0n1 ]] 00:32:22.921 02:07:08 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:22.921 02:07:08 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:22.921 02:07:08 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:22.921 02:07:08 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:32:22.921 02:07:08 -- nvmf/common.sh@654 -- # echo 1 00:32:22.921 02:07:08 -- nvmf/common.sh@655 -- # echo /dev/nvme0n1 00:32:22.921 02:07:08 -- nvmf/common.sh@656 -- # echo 1 00:32:22.921 02:07:08 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:32:22.921 02:07:08 -- nvmf/common.sh@663 -- # echo tcp 00:32:22.921 02:07:08 -- nvmf/common.sh@664 -- # echo 4420 00:32:22.921 02:07:08 -- nvmf/common.sh@665 -- # echo ipv4 00:32:22.921 02:07:08 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:22.921 02:07:08 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:22.921 00:32:22.921 Discovery Log Number of Records 2, Generation counter 2 00:32:22.921 =====Discovery Log Entry 0====== 00:32:22.921 trtype: tcp 00:32:22.921 adrfam: ipv4 00:32:22.921 
subtype: current discovery subsystem 00:32:22.921 treq: not specified, sq flow control disable supported 00:32:22.921 portid: 1 00:32:22.921 trsvcid: 4420 00:32:22.921 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:22.921 traddr: 10.0.0.1 00:32:22.921 eflags: none 00:32:22.921 sectype: none 00:32:22.921 =====Discovery Log Entry 1====== 00:32:22.921 trtype: tcp 00:32:22.921 adrfam: ipv4 00:32:22.921 subtype: nvme subsystem 00:32:22.921 treq: not specified, sq flow control disable supported 00:32:22.921 portid: 1 00:32:22.921 trsvcid: 4420 00:32:22.921 subnqn: kernel_target 00:32:22.921 traddr: 10.0.0.1 00:32:22.921 eflags: none 00:32:22.921 sectype: none 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:22.921 02:07:08 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:22.921 EAL: No free 2048 kB hugepages reported on node 1 00:32:26.208 Initializing NVMe Controllers 00:32:26.208 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:26.208 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:26.208 Initialization complete. Launching workers. 
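kernel_target_abort repeats the same queue-depth sweep against the Linux kernel's nvmet target instead of SPDK's. The configfs writes traced above map onto the standard nvmet attribute files; the log's bare echoes do not show their destination paths, so the targets below are reconstructed, not quoted:

    sub=/sys/kernel/config/nvmet/subsystems/kernel_target
    port=/sys/kernel/config/nvmet/ports/1
    modprobe nvmet
    mkdir "$sub" "$sub/namespaces/1" "$port"
    echo SPDK-kernel_target > "$sub/attr_serial"
    echo 1                  > "$sub/attr_allow_any_host"
    echo /dev/nvme0n1       > "$sub/namespaces/1/device_path"
    echo 1                  > "$sub/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp      > "$port/addr_trtype"
    echo 4420     > "$port/addr_trsvcid"
    echo ipv4     > "$port/addr_adrfam"
    ln -s "$sub" "$port/subsystems/"   # port serves the subsystem from here on
    # The nvme discover output above confirms both the discovery subsystem and
    # kernel_target are visible on 10.0.0.1:4420.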
00:32:26.208 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 22562, failed: 0 00:32:26.208 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 22562, failed to submit 0 00:32:26.208 success 0, unsuccess 22562, failed 0 00:32:26.208 02:07:11 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:26.208 02:07:11 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:26.208 EAL: No free 2048 kB hugepages reported on node 1 00:32:29.492 Initializing NVMe Controllers 00:32:29.492 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:29.492 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:29.492 Initialization complete. Launching workers. 00:32:29.492 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 48769, failed: 0 00:32:29.492 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 12282, failed to submit 36487 00:32:29.492 success 0, unsuccess 12282, failed 0 00:32:29.492 02:07:14 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:29.492 02:07:14 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:32:29.492 EAL: No free 2048 kB hugepages reported on node 1 00:32:32.777 Initializing NVMe Controllers 00:32:32.777 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:32:32.777 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:32:32.777 Initialization complete. Launching workers. 
00:32:32.777 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 47905, failed: 0 00:32:32.777 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 11950, failed to submit 35955 00:32:32.777 success 0, unsuccess 11950, failed 0 00:32:32.777 02:07:17 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:32:32.777 02:07:17 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:32:32.777 02:07:17 -- nvmf/common.sh@677 -- # echo 0 00:32:32.777 02:07:17 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:32:32.777 02:07:17 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:32:32.777 02:07:17 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:32.777 02:07:17 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:32:32.777 02:07:17 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:32:32.777 02:07:17 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:32:32.777 00:32:32.777 real 0m11.846s 00:32:32.777 user 0m3.319s 00:32:32.777 sys 0m2.614s 00:32:32.777 02:07:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:32.777 02:07:17 -- common/autotest_common.sh@10 -- # set +x 00:32:32.777 ************************************ 00:32:32.777 END TEST kernel_target_abort 00:32:32.777 ************************************ 00:32:32.777 02:07:17 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:32:32.777 02:07:17 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:32:32.777 02:07:17 -- nvmf/common.sh@476 -- # nvmfcleanup 00:32:32.777 02:07:17 -- nvmf/common.sh@116 -- # sync 00:32:32.777 02:07:17 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:32:32.777 02:07:17 -- nvmf/common.sh@119 -- # set +e 00:32:32.777 02:07:17 -- nvmf/common.sh@120 -- # for i in {1..20} 00:32:32.777 02:07:17 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:32:32.777 rmmod nvme_tcp 00:32:32.777 rmmod nvme_fabrics 00:32:32.777 rmmod nvme_keyring 00:32:32.777 02:07:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:32:32.777 02:07:17 -- nvmf/common.sh@123 -- # set -e 00:32:32.777 02:07:17 -- nvmf/common.sh@124 -- # return 0 00:32:32.777 02:07:17 -- nvmf/common.sh@477 -- # '[' -n 2309702 ']' 00:32:32.777 02:07:17 -- nvmf/common.sh@478 -- # killprocess 2309702 00:32:32.777 02:07:17 -- common/autotest_common.sh@926 -- # '[' -z 2309702 ']' 00:32:32.777 02:07:17 -- common/autotest_common.sh@930 -- # kill -0 2309702 00:32:32.777 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 930: kill: (2309702) - No such process 00:32:32.777 02:07:17 -- common/autotest_common.sh@953 -- # echo 'Process with pid 2309702 is not found' 00:32:32.777 Process with pid 2309702 is not found 00:32:32.777 02:07:17 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:32:32.777 02:07:17 -- nvmf/common.sh@481 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:33.714 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:32:33.714 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:32:33.714 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:33.714 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:33.714 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:33.714 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:33.714 0000:00:04.2 (8086 0e22): Already using the ioatdma 
driver 00:32:33.714 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:33.714 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:33.714 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:33.714 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:33.714 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:33.714 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:33.714 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:33.715 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:33.715 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:33.715 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:33.975 02:07:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:32:33.975 02:07:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:32:33.975 02:07:19 -- nvmf/common.sh@273 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:33.975 02:07:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:32:33.975 02:07:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:33.975 02:07:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:33.975 02:07:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:35.883 02:07:21 -- nvmf/common.sh@278 -- # ip -4 addr flush cvl_0_1 00:32:35.883 00:32:35.883 real 0m35.128s 00:32:35.883 user 1m1.441s 00:32:35.883 sys 0m8.789s 00:32:35.883 02:07:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:35.883 02:07:21 -- common/autotest_common.sh@10 -- # set +x 00:32:35.883 ************************************ 00:32:35.883 END TEST nvmf_abort_qd_sizes 00:32:35.883 ************************************ 00:32:35.883 02:07:21 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:32:35.883 02:07:21 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:32:35.883 02:07:21 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:32:35.883 02:07:21 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:32:35.883 02:07:21 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:32:35.883 02:07:21 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:32:35.883 02:07:21 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:32:35.883 02:07:21 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:32:35.883 02:07:21 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:32:35.883 02:07:21 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:32:35.883 02:07:21 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:32:35.883 02:07:21 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:32:35.883 02:07:21 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:32:35.883 02:07:21 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:32:35.883 02:07:21 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:32:35.883 02:07:21 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:32:35.883 02:07:21 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:32:35.883 02:07:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:32:35.883 02:07:21 -- common/autotest_common.sh@10 -- # set +x 00:32:35.883 02:07:21 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:32:35.883 02:07:21 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:32:35.883 02:07:21 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:32:35.883 02:07:21 -- common/autotest_common.sh@10 -- # set +x 00:32:37.820 INFO: APP EXITING 00:32:37.820 INFO: killing all VMs 00:32:37.820 INFO: killing vhost app 00:32:37.820 INFO: EXIT DONE 00:32:38.753 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:32:38.753 0000:00:04.7 (8086 0e27): 
Already using the ioatdma driver 00:32:38.753 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:32:38.753 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:32:38.753 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:32:38.753 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:32:38.753 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:32:38.753 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:32:38.753 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:32:38.753 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:32:38.753 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:32:38.753 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:32:38.753 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:32:38.753 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:32:38.753 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:32:38.753 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:32:39.011 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:32:40.388 Cleaning 00:32:40.388 Removing: /var/run/dpdk/spdk0/config 00:32:40.388 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:32:40.388 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:32:40.388 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:32:40.388 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:32:40.388 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:32:40.388 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:32:40.388 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:32:40.388 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:32:40.388 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:32:40.388 Removing: /var/run/dpdk/spdk0/hugepage_info 00:32:40.388 Removing: /var/run/dpdk/spdk1/config 00:32:40.388 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:32:40.388 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:32:40.388 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:32:40.388 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:32:40.388 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:32:40.388 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:32:40.388 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:32:40.388 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:32:40.388 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:32:40.388 Removing: /var/run/dpdk/spdk1/hugepage_info 00:32:40.388 Removing: /var/run/dpdk/spdk1/mp_socket 00:32:40.388 Removing: /var/run/dpdk/spdk2/config 00:32:40.388 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:32:40.388 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:32:40.388 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:32:40.388 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:32:40.388 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:32:40.388 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:32:40.388 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:32:40.388 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:32:40.388 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:32:40.388 Removing: /var/run/dpdk/spdk2/hugepage_info 00:32:40.388 Removing: /var/run/dpdk/spdk3/config 00:32:40.388 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:32:40.388 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:32:40.388 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:32:40.388 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:32:40.388 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:32:40.388 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:32:40.388 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:32:40.388 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:32:40.388 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:32:40.388 Removing: /var/run/dpdk/spdk3/hugepage_info 00:32:40.388 Removing: /var/run/dpdk/spdk4/config 00:32:40.388 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:32:40.388 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:32:40.388 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:32:40.388 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:32:40.388 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:32:40.388 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:32:40.388 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:32:40.388 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:32:40.388 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:32:40.388 Removing: /var/run/dpdk/spdk4/hugepage_info 00:32:40.388 Removing: /dev/shm/bdev_svc_trace.1 00:32:40.388 Removing: /dev/shm/nvmf_trace.0 00:32:40.388 Removing: /dev/shm/spdk_tgt_trace.pid2035838 00:32:40.388 Removing: /var/run/dpdk/spdk0 00:32:40.388 Removing: /var/run/dpdk/spdk1 00:32:40.388 Removing: /var/run/dpdk/spdk2 00:32:40.388 Removing: /var/run/dpdk/spdk3 00:32:40.388 Removing: /var/run/dpdk/spdk4 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2034145 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2034889 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2035838 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2036318 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2037542 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2038487 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2038788 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2038996 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2039326 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2039525 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2039688 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2039854 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2040142 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2040479 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2043012 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2043192 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2043492 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2043628 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2043945 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2044085 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2044496 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2044533 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2044831 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2044969 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2045139 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2045277 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2045655 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2045810 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2046004 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2046299 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2046328 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2046389 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2046646 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2046810 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2046955 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2047110 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2047374 00:32:40.388 
Removing: /var/run/dpdk/spdk_pid2047535 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2047679 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2047838 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2048082 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2048276 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2048498 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2048672 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2048921 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2049090 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2049236 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2049798 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2050114 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2050323 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2050463 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2050625 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2050809 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2051046 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2051190 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2051350 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2051511 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2051773 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2051911 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2052076 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2052245 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2052494 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2052638 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2052799 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2053068 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2053229 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2053376 00:32:40.388 Removing: /var/run/dpdk/spdk_pid2053538 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2053796 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2053962 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2054100 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2054326 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2054450 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2054653 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2056855 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2112852 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2115499 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2121363 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2124715 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2127366 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2127905 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2131793 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2131795 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2132408 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2133032 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2133707 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2134118 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2134127 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2134270 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2134408 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2134415 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2135081 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2135767 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2136324 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2136729 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2136861 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2137005 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2138052 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2139035 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2145150 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2145433 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2147982 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2151876 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2153988 00:32:40.389 
Removing: /var/run/dpdk/spdk_pid2160552 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2166017 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2167242 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2167924 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2179049 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2181285 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2184107 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2185326 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2186702 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2186969 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2187126 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2187271 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2187865 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2189334 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2190244 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2190698 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2194199 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2197772 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2201433 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2225430 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2228154 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2231975 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2232960 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2234111 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2237401 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2239802 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2244181 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2244194 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2247121 00:32:40.389 Removing: /var/run/dpdk/spdk_pid2247265 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2247400 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2247799 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2247805 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2248910 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2250127 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2251344 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2252565 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2253781 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2255039 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2258736 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2259193 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2260245 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2260854 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2264365 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2266410 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2270643 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2274392 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2277930 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2278352 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2278784 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2279309 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2279780 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2280336 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2280889 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2281422 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2283978 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2284246 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2287984 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2288164 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2289932 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2295078 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2295086 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2298034 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2299470 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2301019 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2302407 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2303730 00:32:40.648 
Removing: /var/run/dpdk/spdk_pid2304628 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2310140 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2310544 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2310948 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2312430 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2312839 00:32:40.648 Removing: /var/run/dpdk/spdk_pid2313252 00:32:40.648 Clean 00:32:40.648 killing process with pid 2005945 00:32:48.771 killing process with pid 2005942 00:32:48.771 killing process with pid 2005944 00:32:48.771 killing process with pid 2005943 00:32:48.771 02:07:34 -- common/autotest_common.sh@1436 -- # return 0 00:32:48.771 02:07:34 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:32:48.771 02:07:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:48.771 02:07:34 -- common/autotest_common.sh@10 -- # set +x 00:32:48.771 02:07:34 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:32:48.771 02:07:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:32:48.771 02:07:34 -- common/autotest_common.sh@10 -- # set +x 00:32:48.771 02:07:34 -- spdk/autotest.sh@390 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:32:48.771 02:07:34 -- spdk/autotest.sh@392 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:32:48.771 02:07:34 -- spdk/autotest.sh@392 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:32:48.771 02:07:34 -- spdk/autotest.sh@394 -- # hash lcov 00:32:48.771 02:07:34 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:32:48.771 02:07:34 -- spdk/autotest.sh@396 -- # hostname 00:32:48.771 02:07:34 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:32:48.771 geninfo: WARNING: invalid characters removed from testname! 
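The coverage post-processing around this point follows a standard lcov flow: capture the counters accumulated while the tests ran, merge them with the pre-test baseline capture, then strip third-party and generated paths from the combined tracefile (the merge and filter commands follow below). A condensed, illustrative sketch of the same sequence in bash — the SPDK_DIR/OUT variables and the filter loop are placeholders of mine, not part of the job's scripts, though the lcov flags and remove patterns mirror the commands in this log:

  #!/usr/bin/env bash
  # Hypothetical paths; the job uses the nvmf-tcp-phy-autotest workspace.
  SPDK_DIR=/path/to/spdk
  OUT=$SPDK_DIR/../output
  # Subset of the rc flags used in the log; expanded unquoted on purpose.
  RC='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'

  # Capture counters gathered during the test run (-c), rooted at the build tree (-d),
  # tagging the tracefile with the node name as the log does (-t spdk-gp-11).
  lcov $RC --no-external -q -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"

  # Merge the pre-test baseline with the test-time capture into one tracefile.
  lcov $RC --no-external -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" \
      -o "$OUT/cov_total.info"

  # Drop paths that are not SPDK sources: bundled DPDK, system headers, sample apps.
  for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
      lcov $RC --no-external -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
  done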
00:33:15.342 02:08:00 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:19.534 02:08:04 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:22.103 02:08:07 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:24.647 02:08:10 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:27.944 02:08:12 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:30.481 02:08:15 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:33:33.016 02:08:18 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:33:33.016 02:08:18 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:33.016 02:08:18 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:33:33.016 02:08:18 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:33.016 02:08:18 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:33.016 02:08:18 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.016 02:08:18 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.016 02:08:18 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.016 02:08:18 -- paths/export.sh@5 -- $ export PATH 00:33:33.016 02:08:18 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:33.016 02:08:18 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:33:33.016 02:08:18 -- common/autobuild_common.sh@435 -- $ date +%s 00:33:33.016 02:08:18 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1713139698.XXXXXX 00:33:33.016 02:08:18 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1713139698.fCE9H5 00:33:33.016 02:08:18 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:33:33.016 02:08:18 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']' 00:33:33.016 02:08:18 -- common/autobuild_common.sh@442 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:33:33.016 02:08:18 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:33:33.016 02:08:18 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:33:33.016 02:08:18 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:33:33.016 02:08:18 -- common/autobuild_common.sh@451 -- $ get_config_params 00:33:33.016 02:08:18 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:33:33.016 02:08:18 -- common/autotest_common.sh@10 -- $ set +x 00:33:33.016 02:08:18 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:33:33.016 02:08:18 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:33:33.016 02:08:18 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:33.016 02:08:18 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:33:33.016 02:08:18 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:33:33.016 02:08:18 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:33:33.016 02:08:18 -- 
spdk/autopackage.sh@19 -- $ timing_finish 00:33:33.016 02:08:18 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:33:33.016 02:08:18 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:33:33.016 02:08:18 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:33.016 02:08:18 -- spdk/autopackage.sh@20 -- $ exit 0 00:33:33.016 + [[ -n 1951487 ]] 00:33:33.016 + sudo kill 1951487 00:33:33.025 [Pipeline] } 00:33:33.042 [Pipeline] // stage 00:33:33.046 [Pipeline] } 00:33:33.062 [Pipeline] // timeout 00:33:33.068 [Pipeline] } 00:33:33.083 [Pipeline] // catchError 00:33:33.088 [Pipeline] } 00:33:33.104 [Pipeline] // wrap 00:33:33.110 [Pipeline] } 00:33:33.124 [Pipeline] // catchError 00:33:33.132 [Pipeline] stage 00:33:33.134 [Pipeline] { (Epilogue) 00:33:33.147 [Pipeline] catchError 00:33:33.148 [Pipeline] { 00:33:33.162 [Pipeline] echo 00:33:33.163 Cleanup processes 00:33:33.168 [Pipeline] sh 00:33:33.454 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:33.454 2325218 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:33.468 [Pipeline] sh 00:33:33.754 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:33:33.754 ++ grep -v 'sudo pgrep' 00:33:33.754 ++ awk '{print $1}' 00:33:33.754 + sudo kill -9 00:33:33.754 + true 00:33:33.767 [Pipeline] sh 00:33:34.054 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:33:44.059 [Pipeline] sh 00:33:44.347 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:33:44.347 Artifacts sizes are good 00:33:44.363 [Pipeline] archiveArtifacts 00:33:44.370 Archiving artifacts 00:33:44.583 [Pipeline] sh 00:33:44.871 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:33:44.887 [Pipeline] cleanWs 00:33:44.898 [WS-CLEANUP] Deleting project workspace... 00:33:44.898 [WS-CLEANUP] Deferred wipeout is used... 00:33:44.906 [WS-CLEANUP] done 00:33:44.908 [Pipeline] } 00:33:44.927 [Pipeline] // catchError 00:33:44.938 [Pipeline] sh 00:33:45.222 + logger -p user.info -t JENKINS-CI 00:33:45.231 [Pipeline] } 00:33:45.245 [Pipeline] // stage 00:33:45.250 [Pipeline] } 00:33:45.264 [Pipeline] // node 00:33:45.269 [Pipeline] End of Pipeline 00:33:45.319 Finished: SUCCESS
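Both the prologue and this epilogue clear stray test processes with the same pgrep-and-kill idiom before touching the workspace: list anything still running out of the workspace's spdk tree, filter out the pgrep invocation itself, and force-kill whatever remains while tolerating the empty case. A minimal sketch of that idiom, assuming the workspace path from this job; the trailing '|| true' plays the role of the '+ true' in the trace (in the run above the PID list was empty, so kill -9 received no arguments and failed harmlessly):

  #!/usr/bin/env bash
  WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest
  # pgrep -af prints PID plus full command line for every match.
  pids=$(sudo pgrep -af "$WS/spdk" | grep -v 'sudo pgrep' | awk '{print $1}')
  # kill -9 with an empty PID list exits nonzero; '|| true' keeps set -e scripts alive.
  sudo kill -9 $pids || true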